00:00:00.001 Started by upstream project "autotest-nightly" build number 4309
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3672
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.268 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.268 The recommended git tool is: git
00:00:00.268 using credential 00000000-0000-0000-0000-000000000002
00:00:00.270 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.313 Fetching changes from the remote Git repository
00:00:00.316 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.341 Using shallow fetch with depth 1
00:00:00.341 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.341 > git --version # timeout=10
00:00:00.356 > git --version # 'git version 2.39.2'
00:00:00.356 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.367 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.367 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.620 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.632 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.644 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:08.644 > git config core.sparsecheckout # timeout=10
00:00:08.657 > git read-tree -mu HEAD # timeout=10
00:00:08.673 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.701 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.702 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:08.797 [Pipeline] Start of Pipeline
00:00:08.808 [Pipeline] library
00:00:08.810 Loading library shm_lib@master
00:00:08.810 Library shm_lib@master is cached. Copying from home.
00:00:08.824 [Pipeline] node
00:00:08.834 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest
00:00:08.836 [Pipeline] {
00:00:08.849 [Pipeline] catchError
00:00:08.851 [Pipeline] {
00:00:08.862 [Pipeline] wrap
00:00:08.869 [Pipeline] {
00:00:08.876 [Pipeline] stage
00:00:08.878 [Pipeline] { (Prologue)
00:00:08.897 [Pipeline] echo
00:00:08.898 Node: VM-host-WFP1
00:00:08.905 [Pipeline] cleanWs
00:00:08.915 [WS-CLEANUP] Deleting project workspace...
00:00:08.915 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.921 [WS-CLEANUP] done
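
A minimal Bash sketch of the pinned shallow checkout performed above: fetch only the tip of refs/heads/master at depth 1, then force-checkout the revision resolved from FETCH_HEAD. Repository URL and SHA are taken from this log; the credential, proxy, and timeout handling the Jenkins git plugin adds is omitted.

    #!/usr/bin/env bash
    set -euo pipefail
    repo=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    rev=db4637e8b949f278f369ec13f70585206ccd9507
    git init jbp && cd jbp
    # Shallow fetch: only the single tip commit of master is transferred.
    git fetch --tags --force --progress --depth=1 -- "$repo" refs/heads/master
    # Pin the workspace to the exact commit Jenkins resolved (detached HEAD).
    git checkout -f "$rev"
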
00:00:09.126 [Pipeline] setCustomBuildProperty
00:00:09.201 [Pipeline] httpRequest
00:00:09.740 [Pipeline] echo
00:00:09.741 Sorcerer 10.211.164.101 is alive
00:00:09.749 [Pipeline] retry
00:00:09.750 [Pipeline] {
00:00:09.762 [Pipeline] httpRequest
00:00:09.767 HttpMethod: GET
00:00:09.768 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.768 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.802 Response Code: HTTP/1.1 200 OK
00:00:09.803 Success: Status code 200 is in the accepted range: 200,404
00:00:09.804 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:45.442 [Pipeline] }
00:00:45.460 [Pipeline] // retry
00:00:45.469 [Pipeline] sh
00:00:45.751 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:45.769 [Pipeline] httpRequest
00:00:46.233 [Pipeline] echo
00:00:46.236 Sorcerer 10.211.164.101 is alive
00:00:46.247 [Pipeline] retry
00:00:46.250 [Pipeline] {
00:00:46.265 [Pipeline] httpRequest
00:00:46.270 HttpMethod: GET
00:00:46.271 URL: http://10.211.164.101/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:00:46.271 Sending request to url: http://10.211.164.101/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:00:46.279 Response Code: HTTP/1.1 200 OK
00:00:46.279 Success: Status code 200 is in the accepted range: 200,404
00:00:46.280 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:01:39.382 [Pipeline] }
00:01:39.399 [Pipeline] // retry
00:01:39.407 [Pipeline] sh
00:01:39.688 + tar --no-same-owner -xf spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:01:42.236 [Pipeline] sh
00:01:42.514 + git -C spdk log --oneline -n5
00:01:42.514 2f2acf4eb doc: move nvmf_tracing.md to tracing.md
00:01:42.514 5592070b3 doc: update nvmf_tracing.md
00:01:42.514 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK
00:01:42.514 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set
00:01:42.514 aa58c9e0b dif: Add spdk_dif_pi_format_get_size() to use for NVMe PRACT
00:01:42.529 [Pipeline] writeFile
00:01:42.539 [Pipeline] sh
00:01:42.817 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:42.829 [Pipeline] sh
00:01:43.108 + cat autorun-spdk.conf
00:01:43.109 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:43.109 SPDK_TEST_NVME=1
00:01:43.109 SPDK_TEST_FTL=1
00:01:43.109 SPDK_TEST_ISAL=1
00:01:43.109 SPDK_RUN_ASAN=1
00:01:43.109 SPDK_RUN_UBSAN=1
00:01:43.109 SPDK_TEST_XNVME=1
00:01:43.109 SPDK_TEST_NVME_FDP=1
00:01:43.109 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:43.116 RUN_NIGHTLY=1
00:01:43.118 [Pipeline] }
00:01:43.132 [Pipeline] // stage
00:01:43.146 [Pipeline] stage
00:01:43.148 [Pipeline] { (Run VM)
00:01:43.160 [Pipeline] sh
00:01:43.470 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:43.470 + echo 'Start stage prepare_nvme.sh'
00:01:43.470 Start stage prepare_nvme.sh
00:01:43.470 + [[ -n 6 ]]
00:01:43.470 + disk_prefix=ex6
00:01:43.470 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:43.470 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:43.470 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:43.470 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:43.470 ++ SPDK_TEST_NVME=1
00:01:43.470 ++ SPDK_TEST_FTL=1
00:01:43.470 ++ SPDK_TEST_ISAL=1
00:01:43.470 ++ SPDK_RUN_ASAN=1
00:01:43.470 ++ SPDK_RUN_UBSAN=1
00:01:43.470 ++ SPDK_TEST_XNVME=1
00:01:43.470 ++ SPDK_TEST_NVME_FDP=1
00:01:43.470 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:43.470 ++ RUN_NIGHTLY=1
00:01:43.470 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:43.470 + nvme_files=()
00:01:43.470 + declare -A nvme_files
00:01:43.470 + backend_dir=/var/lib/libvirt/images/backends
00:01:43.470 + nvme_files['nvme.img']=5G
00:01:43.470 + nvme_files['nvme-cmb.img']=5G
00:01:43.470 + nvme_files['nvme-multi0.img']=4G
00:01:43.470 + nvme_files['nvme-multi1.img']=4G
00:01:43.470 + nvme_files['nvme-multi2.img']=4G
00:01:43.470 + nvme_files['nvme-openstack.img']=8G
00:01:43.470 + nvme_files['nvme-zns.img']=5G
00:01:43.470 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:43.470 + (( SPDK_TEST_FTL == 1 ))
00:01:43.470 + nvme_files["nvme-ftl.img"]=6G
00:01:43.470 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:43.470 + nvme_files["nvme-fdp.img"]=1G
00:01:43.470 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:43.470 + for nvme in "${!nvme_files[@]}"
00:01:43.470 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G
00:01:43.470 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:43.470 + for nvme in "${!nvme_files[@]}"
00:01:43.470 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G
00:01:43.470 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:43.470 + for nvme in "${!nvme_files[@]}"
00:01:43.470 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G
00:01:43.470 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:43.470 + for nvme in "${!nvme_files[@]}"
00:01:43.470 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G
00:01:43.729 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:43.729 + for nvme in "${!nvme_files[@]}"
00:01:43.729 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G
00:01:43.729 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:43.729 + for nvme in "${!nvme_files[@]}"
00:01:43.729 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G
00:01:43.729 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:43.729 + for nvme in "${!nvme_files[@]}"
00:01:43.729 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G
00:01:43.729 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:43.729 + for nvme in "${!nvme_files[@]}"
00:01:43.729 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G
00:01:43.729 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:43.729 + for nvme in "${!nvme_files[@]}"
00:01:43.729 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G
00:01:44.000 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc
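
The loop above drives spdk/scripts/vagrant/create_nvme_img.sh from an associative array mapping image names to sizes; the "Formatting ... fmt=raw ... preallocation=falloc" lines are qemu-img output. A sketch of the same pattern, assuming qemu-img is the underlying tool (the helper script's internals are not shown in this log):

    #!/usr/bin/env bash
    set -euo pipefail
    backend_dir=/var/lib/libvirt/images/backends
    # Image name -> size, as in the nvme_files array above (subset shown).
    declare -A nvme_files=( [nvme.img]=5G [nvme-ftl.img]=6G [nvme-fdp.img]=1G )
    for name in "${!nvme_files[@]}"; do
        # Raw format with falloc preallocation matches the log output above.
        qemu-img create -f raw -o preallocation=falloc \
            "$backend_dir/ex6-$name" "${nvme_files[$name]}"
    done
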
00:01:44.000 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu
00:01:44.000 + echo 'End stage prepare_nvme.sh'
00:01:44.000 End stage prepare_nvme.sh
00:01:44.011 [Pipeline] sh
00:01:44.295 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:44.295 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:44.295 
00:01:44.295 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:44.295 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:44.295 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:44.295 HELP=0
00:01:44.295 DRY_RUN=0
00:01:44.295 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,
00:01:44.295 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:44.295 NVME_AUTO_CREATE=0
00:01:44.295 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,,
00:01:44.295 NVME_CMB=,,,,
00:01:44.295 NVME_PMR=,,,,
00:01:44.295 NVME_ZNS=,,,,
00:01:44.295 NVME_MS=true,,,,
00:01:44.295 NVME_FDP=,,,on,
00:01:44.295 SPDK_VAGRANT_DISTRO=fedora39
00:01:44.295 SPDK_VAGRANT_VMCPU=10
00:01:44.295 SPDK_VAGRANT_VMRAM=12288
00:01:44.295 SPDK_VAGRANT_PROVIDER=libvirt
00:01:44.295 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:44.295 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:44.295 SPDK_OPENSTACK_NETWORK=0
00:01:44.295 VAGRANT_PACKAGE_BOX=0
00:01:44.295 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:44.295 FORCE_DISTRO=true
00:01:44.295 VAGRANT_BOX_VERSION=
00:01:44.295 EXTRA_VAGRANTFILES=
00:01:44.295 NIC_MODEL=e1000
00:01:44.295 
00:01:44.295 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:44.295 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:46.834 Bringing machine 'default' up with 'libvirt' provider...
00:01:48.212 ==> default: Creating image (snapshot of base box volume).
00:01:48.212 ==> default: Creating domain with the following settings...
00:01:48.212 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732707937_1685d69995cb47de7324
00:01:48.212 ==> default: -- Domain type: kvm
00:01:48.212 ==> default: -- Cpus: 10
00:01:48.212 ==> default: -- Feature: acpi
00:01:48.212 ==> default: -- Feature: apic
00:01:48.212 ==> default: -- Feature: pae
00:01:48.212 ==> default: -- Memory: 12288M
00:01:48.212 ==> default: -- Memory Backing: hugepages:
00:01:48.212 ==> default: -- Management MAC:
00:01:48.212 ==> default: -- Loader:
00:01:48.212 ==> default: -- Nvram:
00:01:48.212 ==> default: -- Base box: spdk/fedora39
00:01:48.212 ==> default: -- Storage pool: default
00:01:48.212 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732707937_1685d69995cb47de7324.img (20G)
00:01:48.212 ==> default: -- Volume Cache: default
00:01:48.212 ==> default: -- Kernel:
00:01:48.212 ==> default: -- Initrd:
00:01:48.212 ==> default: -- Graphics Type: vnc
00:01:48.212 ==> default: -- Graphics Port: -1
00:01:48.212 ==> default: -- Graphics IP: 127.0.0.1
00:01:48.212 ==> default: -- Graphics Password: Not defined
00:01:48.212 ==> default: -- Video Type: cirrus
00:01:48.212 ==> default: -- Video VRAM: 9216
00:01:48.212 ==> default: -- Sound Type:
00:01:48.212 ==> default: -- Keymap: en-us
00:01:48.212 ==> default: -- TPM Path:
00:01:48.212 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:48.212 ==> default: -- Command line args:
00:01:48.212 ==> default: -> value=-device,
00:01:48.212 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:48.212 ==> default: -> value=-drive,
00:01:48.212 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:48.212 ==> default: -> value=-device,
00:01:48.212 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:48.212 ==> default: -> value=-device,
00:01:48.212 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:48.212 ==> default: -> value=-drive,
00:01:48.212 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0,
00:01:48.212 ==> default: -> value=-device,
00:01:48.212 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:48.212 ==> default: -> value=-device,
00:01:48.212 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:48.212 ==> default: -> value=-drive,
00:01:48.212 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:48.212 ==> default: -> value=-device,
00:01:48.212 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:48.212 ==> default: -> value=-drive,
00:01:48.212 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:48.212 ==> default: -> value=-device,
00:01:48.212 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:48.212 ==> default: -> value=-drive,
00:01:48.212 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:48.212 ==> default: -> value=-device,
00:01:48.212 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:48.212 ==> default: -> value=-device,
00:01:48.212 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:48.212 ==> default: -> value=-device,
00:01:48.212 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:48.212 ==> default: -> value=-drive,
00:01:48.212 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:48.212 ==> default: -> value=-device,
00:01:48.212 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:48.781 ==> default: Creating shared folders metadata...
00:01:48.781 ==> default: Starting domain.
00:01:50.686 ==> default: Waiting for domain to get an IP address...
00:02:05.575 ==> default: Waiting for SSH to become available...
00:02:07.481 ==> default: Configuring and enabling network interfaces...
00:02:12.755 default: SSH address: 192.168.121.139:22
00:02:12.755 default: SSH username: vagrant
00:02:12.755 default: SSH auth method: private key
00:02:15.289 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:25.267 ==> default: Mounting SSHFS shared folder...
00:02:26.263 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:26.263 ==> default: Checking Mount..
00:02:28.171 ==> default: Folder Successfully Mounted!
00:02:28.171 ==> default: Running provisioner: file...
00:02:29.109 default: ~/.gitconfig => .gitconfig
00:02:29.677 
00:02:29.677 SUCCESS!
00:02:29.677 
00:02:29.677 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:29.677 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:29.677 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:29.677 
00:02:29.687 [Pipeline] }
00:02:29.703 [Pipeline] // stage
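
The "-device"/"-drive" value pairs above are the QEMU arguments the generated libvirt domain ends up with: one NVMe controller per backing file, and an NVMe subsystem with Flexible Data Placement (FDP) enabled for the fourth controller. A condensed sketch of just that FDP controller, reassembled from the values above; machine, memory, and display flags are omitted, so this is illustrative rather than a complete boot command:

    # NVMe subsystem with FDP on (fdp.runs = reclaim unit nominal size,
    # fdp.nrg = reclaim groups, fdp.nruh = reclaim unit handles);
    # controller nvme-3 joins it with one raw-backed namespace.
    qemu-system-x86_64 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096
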
00:02:29.714 [Pipeline] dir
00:02:29.714 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:29.717 [Pipeline] {
00:02:29.730 [Pipeline] catchError
00:02:29.731 [Pipeline] {
00:02:29.743 [Pipeline] sh
00:02:30.023 + vagrant ssh-config --host vagrant
00:02:30.023 + sed -ne /^Host/,$p
00:02:30.023 + tee ssh_conf
00:02:33.315 Host vagrant
00:02:33.315 HostName 192.168.121.139
00:02:33.315 User vagrant
00:02:33.315 Port 22
00:02:33.315 UserKnownHostsFile /dev/null
00:02:33.315 StrictHostKeyChecking no
00:02:33.315 PasswordAuthentication no
00:02:33.315 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:33.315 IdentitiesOnly yes
00:02:33.315 LogLevel FATAL
00:02:33.315 ForwardAgent yes
00:02:33.315 ForwardX11 yes
00:02:33.315 
00:02:33.330 [Pipeline] withEnv
00:02:33.332 [Pipeline] {
00:02:33.347 [Pipeline] sh
00:02:33.628 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:33.628 source /etc/os-release
00:02:33.628 [[ -e /image.version ]] && img=$(< /image.version)
00:02:33.628 # Minimal, systemd-like check.
00:02:33.628 if [[ -e /.dockerenv ]]; then
00:02:33.628 # Clear garbage from the node's name:
00:02:33.628 # agt-er_autotest_547-896 -> autotest_547-896
00:02:33.628 # $HOSTNAME is the actual container id
00:02:33.628 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:33.628 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:33.628 # We can assume this is a mount from a host where container is running,
00:02:33.629 # so fetch its hostname to easily identify the target swarm worker.
00:02:33.629 container="$(< /etc/hostname) ($agent)"
00:02:33.629 else
00:02:33.629 # Fallback
00:02:33.629 container=$agent
00:02:33.629 fi
00:02:33.629 fi
00:02:33.629 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:33.629 
00:02:33.900 [Pipeline] }
00:02:33.917 [Pipeline] // withEnv
00:02:33.926 [Pipeline] setCustomBuildProperty
00:02:33.942 [Pipeline] stage
00:02:33.945 [Pipeline] { (Tests)
00:02:33.963 [Pipeline] sh
00:02:34.249 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:34.525 [Pipeline] sh
00:02:34.810 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:35.089 [Pipeline] timeout
00:02:35.090 Timeout set to expire in 50 min
00:02:35.093 [Pipeline] {
00:02:35.109 [Pipeline] sh
00:02:35.392 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:35.961 HEAD is now at 2f2acf4eb doc: move nvmf_tracing.md to tracing.md
00:02:35.973 [Pipeline] sh
00:02:36.255 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:36.530 [Pipeline] sh
00:02:36.813 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:37.090 [Pipeline] sh
00:02:37.372 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:02:37.632 ++ readlink -f spdk_repo
00:02:37.632 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:37.632 + [[ -n /home/vagrant/spdk_repo ]]
00:02:37.632 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:37.632 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:37.632 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:37.632 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:37.632 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:37.632 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:37.632 + cd /home/vagrant/spdk_repo
00:02:37.632 + source /etc/os-release
00:02:37.632 ++ NAME='Fedora Linux'
00:02:37.632 ++ VERSION='39 (Cloud Edition)'
00:02:37.632 ++ ID=fedora
00:02:37.632 ++ VERSION_ID=39
00:02:37.632 ++ VERSION_CODENAME=
00:02:37.632 ++ PLATFORM_ID=platform:f39
00:02:37.632 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:37.632 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:37.632 ++ LOGO=fedora-logo-icon
00:02:37.632 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:37.632 ++ HOME_URL=https://fedoraproject.org/
00:02:37.632 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:37.632 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:37.632 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:37.632 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:37.633 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:37.633 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:37.633 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:37.633 ++ SUPPORT_END=2024-11-12
00:02:37.633 ++ VARIANT='Cloud Edition'
00:02:37.633 ++ VARIANT_ID=cloud
00:02:37.633 + uname -a
00:02:37.633 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:37.633 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:38.203 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:38.463 Hugepages
00:02:38.463 node hugesize free / total
00:02:38.463 node0 1048576kB 0 / 0
00:02:38.463 node0 2048kB 0 / 0
00:02:38.463 
00:02:38.463 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:38.463 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:38.463 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:38.463 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:02:38.463 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3
00:02:38.723 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1
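
setup.sh status above reports hugepage pools and the PCI storage devices visible inside the VM. The same information can be read straight from sysfs; a sketch using standard kernel interfaces (nothing SPDK-specific):

    #!/usr/bin/env bash
    # Hugepage pools per NUMA node, as in the "Hugepages" table above.
    for d in /sys/devices/system/node/node*/hugepages/hugepages-*; do
        node=${d#/sys/devices/system/node/}; node=${node%%/*}
        size=${d##*hugepages-}
        echo "$node $size free=$(< "$d"/free_hugepages) / total=$(< "$d"/nr_hugepages)"
    done
    # NVMe controllers with their PCI addresses and block devices.
    for c in /sys/class/nvme/nvme*; do
        ctrl=${c##*/}
        bdf=$(basename "$(readlink -f "$c/device")")
        printf '%s %s' "$ctrl" "$bdf"
        for ns in "$c/$ctrl"n*; do [[ -e $ns ]] && printf ' %s' "${ns##*/}"; done
        printf '\n'
    done
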
00:02:38.723 + rm -f /tmp/spdk-ld-path
00:02:38.723 + source autorun-spdk.conf
00:02:38.723 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:38.723 ++ SPDK_TEST_NVME=1
00:02:38.723 ++ SPDK_TEST_FTL=1
00:02:38.723 ++ SPDK_TEST_ISAL=1
00:02:38.723 ++ SPDK_RUN_ASAN=1
00:02:38.723 ++ SPDK_RUN_UBSAN=1
00:02:38.723 ++ SPDK_TEST_XNVME=1
00:02:38.723 ++ SPDK_TEST_NVME_FDP=1
00:02:38.723 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:38.723 ++ RUN_NIGHTLY=1
00:02:38.723 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:38.723 + [[ -n '' ]]
00:02:38.723 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:38.723 + for M in /var/spdk/build-*-manifest.txt
00:02:38.723 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:38.723 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:38.723 + for M in /var/spdk/build-*-manifest.txt
00:02:38.723 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:38.723 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:38.723 + for M in /var/spdk/build-*-manifest.txt
00:02:38.723 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:38.723 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:38.723 ++ uname
00:02:38.723 + [[ Linux == \L\i\n\u\x ]]
00:02:38.723 + sudo dmesg -T
00:02:38.723 + sudo dmesg --clear
00:02:38.723 + dmesg_pid=5245
00:02:38.723 + [[ Fedora Linux == FreeBSD ]]
00:02:38.723 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:38.723 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:38.723 + sudo dmesg -Tw
00:02:38.723 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:38.723 + [[ -x /usr/src/fio-static/fio ]]
00:02:38.723 + export FIO_BIN=/usr/src/fio-static/fio
00:02:38.723 + FIO_BIN=/usr/src/fio-static/fio
00:02:38.723 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:38.723 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:38.723 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:38.723 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:38.723 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:38.723 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:38.723 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:38.723 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:38.723 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:38.982 11:46:28 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
11:46:28 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
11:46:28 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
11:46:28 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
11:46:28 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
11:46:28 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
11:46:28 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
11:46:28 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
11:46:28 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
11:46:28 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
11:46:28 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
11:46:28 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1
11:46:28 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
11:46:28 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
11:46:28 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
11:46:28 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
11:46:28 -- scripts/common.sh@15 -- $ shopt -s extglob
11:46:28 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
11:46:28 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
11:46:28 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
11:46:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:46:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:46:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:46:28 -- paths/export.sh@5 -- $ export PATH
11:46:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:46:28 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
11:46:28 -- common/autobuild_common.sh@493 -- $ date +%s
11:46:28 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732707988.XXXXXX
11:46:28 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732707988.JRd5tX
11:46:28 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
11:46:28 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
11:46:28 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
11:46:28 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
11:46:28 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
11:46:28 -- common/autobuild_common.sh@509 -- $ get_config_params
11:46:28 -- common/autotest_common.sh@409 -- $ xtrace_disable
11:46:28 -- common/autotest_common.sh@10 -- $ set +x
11:46:28 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
11:46:28 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
11:46:28 -- pm/common@17 -- $ local monitor
11:46:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
11:46:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
11:46:28 -- pm/common@25 -- $ sleep 1
11:46:28 -- pm/common@21 -- $ date +%s
11:46:28 -- pm/common@21 -- $ date +%s
11:46:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732707988
11:46:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732707988
00:02:39.253 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732707988_collect-vmstat.pm.log
00:02:39.253 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732707988_collect-cpu-load.pm.log
00:02:40.190 11:46:29 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:40.190 11:46:29 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:40.190 11:46:29 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:40.190 11:46:29 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:40.190 11:46:29 -- spdk/autobuild.sh@16 -- $ date -u
00:02:40.190 Wed Nov 27 11:46:29 AM UTC 2024
00:02:40.190 11:46:29 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:40.190 v25.01-pre-271-g2f2acf4eb
00:02:40.190 11:46:30 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:40.190 11:46:30 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:40.190 11:46:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:40.190 11:46:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:40.190 11:46:30 -- common/autotest_common.sh@10 -- $ set +x
00:02:40.190 ************************************
00:02:40.190 START TEST asan
00:02:40.190 ************************************
00:02:40.190 using asan
00:02:40.190 11:46:30 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:40.190 
00:02:40.190 real 0m0.000s
00:02:40.190 user 0m0.000s
00:02:40.190 sys 0m0.000s
00:02:40.190 11:46:30 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:40.190 11:46:30 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:40.190 ************************************
00:02:40.190 END TEST asan
00:02:40.190 ************************************
00:02:40.190 11:46:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:40.190 11:46:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:40.190 11:46:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:40.190 11:46:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:40.190 11:46:30 -- common/autotest_common.sh@10 -- $ set +x
00:02:40.190 ************************************
00:02:40.190 START TEST ubsan
00:02:40.190 ************************************
00:02:40.190 using ubsan
00:02:40.190 11:46:30 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:40.190 
00:02:40.190 real 0m0.000s
00:02:40.190 user 0m0.000s
00:02:40.190 sys 0m0.000s
00:02:40.190 11:46:30 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:40.190 11:46:30 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:40.190 ************************************
00:02:40.190 END TEST ubsan
00:02:40.190 ************************************
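
run_test, seen above for asan and ubsan, brackets a command with START/END banners and reports bash's time builtin, which is where the "real/user/sys" lines come from. A minimal sketch of that pattern; the real helper in autotest_common.sh also manages xtrace state and per-test bookkeeping, which is omitted here:

    # Sketch of the run_test pattern: banner, timed command, banner.
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }
    run_test asan echo 'using asan'
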
00:02:40.190 11:46:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:40.190 11:46:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:40.190 11:46:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:40.190 11:46:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:40.190 11:46:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:40.190 11:46:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:40.190 11:46:30 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:40.190 11:46:30 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:40.190 11:46:30 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:40.449 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:40.449 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:41.032 Using 'verbs' RDMA provider
00:02:56.866 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:14.957 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:14.957 Creating mk/config.mk...done.
00:03:14.957 Creating mk/cc.flags.mk...done.
00:03:14.957 Type 'make' to build.
00:03:14.957 11:47:02 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:14.957 11:47:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:14.957 11:47:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:14.957 11:47:02 -- common/autotest_common.sh@10 -- $ set +x
00:03:14.957 ************************************
00:03:14.957 START TEST make
00:03:14.957 ************************************
00:03:14.957 11:47:02 make -- common/autotest_common.sh@1129 -- $ make -j10
00:03:14.957 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:03:14.957 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:03:14.957 meson setup builddir \
00:03:14.957 -Dwith-libaio=enabled \
00:03:14.957 -Dwith-liburing=enabled \
00:03:14.957 -Dwith-libvfn=disabled \
00:03:14.957 -Dwith-spdk=disabled \
00:03:14.957 -Dexamples=false \
00:03:14.957 -Dtests=false \
00:03:14.957 -Dtools=false && \
00:03:14.957 meson compile -C builddir && \
00:03:14.957 cd -)
00:03:14.957 make[1]: Nothing to be done for 'all'.
00:03:15.525 The Meson build system
00:03:15.525 Version: 1.5.0
00:03:15.525 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:03:15.525 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:15.525 Build type: native build
00:03:15.525 Project name: xnvme
00:03:15.525 Project version: 0.7.5
00:03:15.525 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:15.525 C linker for the host machine: cc ld.bfd 2.40-14
00:03:15.525 Host machine cpu family: x86_64
00:03:15.525 Host machine cpu: x86_64
00:03:15.525 Message: host_machine.system: linux
00:03:15.525 Compiler for C supports arguments -Wno-missing-braces: YES
00:03:15.525 Compiler for C supports arguments -Wno-cast-function-type: YES
00:03:15.525 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:15.525 Run-time dependency threads found: YES
00:03:15.525 Has header "setupapi.h" : NO
00:03:15.525 Has header "linux/blkzoned.h" : YES
00:03:15.525 Has header "linux/blkzoned.h" : YES (cached)
00:03:15.525 Has header "libaio.h" : YES
00:03:15.525 Library aio found: YES
00:03:15.525 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:15.525 Run-time dependency liburing found: YES 2.2
00:03:15.525 Dependency libvfn skipped: feature with-libvfn disabled
00:03:15.525 Found CMake: /usr/bin/cmake (3.27.7)
00:03:15.525 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:03:15.526 Subproject spdk : skipped: feature with-spdk disabled
00:03:15.526 Run-time dependency appleframeworks found: NO (tried framework)
00:03:15.526 Run-time dependency appleframeworks found: NO (tried framework)
00:03:15.526 Library rt found: YES
00:03:15.526 Checking for function "clock_gettime" with dependency -lrt: YES
00:03:15.526 Configuring xnvme_config.h using configuration
00:03:15.526 Configuring xnvme.spec using configuration
00:03:15.526 Run-time dependency bash-completion found: YES 2.11
00:03:15.526 Message: Bash-completions: /usr/share/bash-completion/completions
00:03:15.526 Program cp found: YES (/usr/bin/cp)
00:03:15.526 Build targets in project: 3
00:03:15.526 
00:03:15.526 xnvme 0.7.5
00:03:15.526 
00:03:15.526 Subprojects
00:03:15.526 spdk : NO Feature 'with-spdk' disabled
00:03:15.526 
00:03:15.526 User defined options
00:03:15.526 examples : false
00:03:15.526 tests : false
00:03:15.526 tools : false
00:03:15.526 with-libaio : enabled
00:03:15.526 with-liburing: enabled
00:03:15.526 with-libvfn : disabled
00:03:15.526 with-spdk : disabled
00:03:15.526 
00:03:15.526 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
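
The dependency lines above ("Has header", "Library ... found", "Run-time dependency ... found") are meson probes against the compiler and pkg-config. When a dependency unexpectedly goes missing on the VM, the same checks can be reproduced by hand; a sketch with standard cc/pkg-config invocations (not part of the xnvme build itself):

    # pkg-config probe, as behind "Run-time dependency liburing found: YES 2.2".
    pkg-config --exists liburing && pkg-config --modversion liburing
    # Header probe, as behind 'Has header "libaio.h" : YES'.
    echo '#include <libaio.h>' | cc -E -xc - >/dev/null 2>&1 && echo 'libaio.h: YES'
    # Link probe, as behind "Library aio found: YES".
    echo 'int main(void){return 0;}' | cc -xc - -laio -o /dev/null && echo 'aio: YES'
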
00:03:15.785 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:03:15.785 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:03:16.045 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:03:16.045 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:03:16.045 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:03:16.045 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:03:16.045 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:03:16.045 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:03:16.045 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:03:16.045 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:03:16.045 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:03:16.045 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:03:16.045 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:03:16.045 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:03:16.045 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:03:16.045 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:03:16.045 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:03:16.045 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:03:16.045 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:03:16.045 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:03:16.045 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:03:16.045 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:03:16.045 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:03:16.045 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:03:16.045 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:03:16.305 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:03:16.305 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:03:16.305 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:03:16.305 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:03:16.305 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:03:16.305 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:03:16.305 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:03:16.305 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:03:16.305 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:03:16.305 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:03:16.305 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:03:16.305 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:03:16.305 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:03:16.305 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:03:16.305 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:03:16.305 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:03:16.305 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:03:16.305 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:03:16.305 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:03:16.305 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:03:16.305 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:03:16.305 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:03:16.305 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:03:16.305 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:03:16.305 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:03:16.305 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:03:16.305 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:03:16.306 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:03:16.306 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:03:16.306 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:03:16.306 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:03:16.306 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:03:16.306 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:03:16.306 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:03:16.564 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:03:16.564 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:03:16.564 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:03:16.564 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:03:16.564 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:03:16.564 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:03:16.564 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:03:16.564 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:03:16.565 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:03:16.565 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:03:16.565 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:03:16.565 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:03:16.824 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:03:16.824 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:03:16.824 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:03:17.083 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:03:17.083 [75/76] Linking static target lib/libxnvme.a
00:03:17.083 [76/76] Linking target lib/libxnvme.so.0.7.5
00:03:17.083 INFO: autodetecting backend as ninja
00:03:17.083 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:17.341 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:03:23.978 The Meson build system
00:03:23.978 Version: 1.5.0
00:03:23.978 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:23.978 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:23.978 Build type: native build
00:03:23.978 Program cat found: YES (/usr/bin/cat)
00:03:23.978 Project name: DPDK
00:03:23.978 Project version: 24.03.0
00:03:23.978 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:23.978 C linker for the host machine: cc ld.bfd 2.40-14
00:03:23.978 Host machine cpu family: x86_64
00:03:23.978 Host machine cpu: x86_64
00:03:23.978 Message: ## Building in Developer Mode ##
00:03:23.978 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:23.978 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:23.978 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:23.978 Program python3 found: YES (/usr/bin/python3)
00:03:23.978 Program cat found: YES (/usr/bin/cat)
00:03:23.978 Compiler for C supports arguments -march=native: YES
00:03:23.978 Checking for size of "void *" : 8
00:03:23.978 Checking for size of "void *" : 8 (cached)
00:03:23.978 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:23.978 Library m found: YES
00:03:23.978 Library numa found: YES
00:03:23.978 Has header "numaif.h" : YES
00:03:23.978 Library fdt found: NO
00:03:23.978 Library execinfo found: NO
00:03:23.978 Has header "execinfo.h" : YES
00:03:23.978 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:23.978 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:23.978 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:23.978 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:23.978 Run-time dependency openssl found: YES 3.1.1
00:03:23.978 Run-time dependency libpcap found: YES 1.10.4
00:03:23.978 Has header "pcap.h" with dependency libpcap: YES
00:03:23.978 Compiler for C supports arguments -Wcast-qual: YES
00:03:23.978 Compiler for C supports arguments -Wdeprecated: YES
00:03:23.978 Compiler for C supports arguments -Wformat: YES
00:03:23.978 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:23.978 Compiler for C supports arguments -Wformat-security: NO
00:03:23.978 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:23.978 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:23.978 Compiler for C supports arguments -Wnested-externs: YES
00:03:23.978 Compiler for C supports arguments -Wold-style-definition: YES
00:03:23.978 Compiler for C supports arguments -Wpointer-arith: YES
00:03:23.978 Compiler for C supports arguments -Wsign-compare: YES
00:03:23.978 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:23.978 Compiler for C supports arguments -Wundef: YES
00:03:23.978 Compiler for C supports arguments -Wwrite-strings: YES
00:03:23.978 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:23.978 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:23.978 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:23.978 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:23.978 Program objdump found: YES (/usr/bin/objdump)
00:03:23.978 Compiler for C supports arguments -mavx512f: YES
00:03:23.978 Checking if "AVX512 checking" compiles: YES
00:03:23.978 Fetching value of define "__SSE4_2__" : 1
00:03:23.978 Fetching value of define "__AES__" : 1
00:03:23.978 Fetching value of define "__AVX__" : 1
00:03:23.978 Fetching value of define "__AVX2__" : 1
00:03:23.978 Fetching value of define "__AVX512BW__" : 1
00:03:23.978 Fetching value of define "__AVX512CD__" : 1
00:03:23.978 Fetching value of define "__AVX512DQ__" : 1
00:03:23.978 Fetching value of define "__AVX512F__" : 1
00:03:23.978 Fetching value of define "__AVX512VL__" : 1
00:03:23.978 Fetching value of define "__PCLMUL__" : 1
00:03:23.978 Fetching value of define "__RDRND__" : 1
00:03:23.978 Fetching value of define "__RDSEED__" : 1
00:03:23.978 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:23.978 Fetching value of define "__znver1__" : (undefined)
00:03:23.978 Fetching value of define "__znver2__" : (undefined)
00:03:23.978 Fetching value of define "__znver3__" : (undefined)
00:03:23.978 Fetching value of define "__znver4__" : (undefined)
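
The "Fetching value of define" lines show meson reading the compiler's predefined macros to decide which ISA paths DPDK can use, and "Compiler for C supports arguments" is a trial compile with the flag in question. Both are reproducible from the shell; this is standard gcc/clang behavior, nothing DPDK-specific:

    # Dump predefined macros and pick out the defines checked above.
    cc -march=native -dM -E - </dev/null \
        | grep -E '__(SSE4_2|AES|AVX|AVX2|AVX512F|PCLMUL|RDRND|RDSEED)__'
    # Probe a single flag the way meson does, with a throwaway compile.
    echo 'int main(void){return 0;}' | cc -xc -mavx512f - -o /dev/null \
        && echo 'Compiler for C supports arguments -mavx512f: YES'
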
00:03:23.978 Library asan found: YES
00:03:23.978 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:23.978 Message: lib/log: Defining dependency "log"
00:03:23.978 Message: lib/kvargs: Defining dependency "kvargs"
00:03:23.978 Message: lib/telemetry: Defining dependency "telemetry"
00:03:23.978 Library rt found: YES
00:03:23.978 Checking for function "getentropy" : NO
00:03:23.978 Message: lib/eal: Defining dependency "eal"
00:03:23.978 Message: lib/ring: Defining dependency "ring"
00:03:23.978 Message: lib/rcu: Defining dependency "rcu"
00:03:23.978 Message: lib/mempool: Defining dependency "mempool"
00:03:23.978 Message: lib/mbuf: Defining dependency "mbuf"
00:03:23.978 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:23.978 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:23.978 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:23.978 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:23.978 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:23.978 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:03:23.978 Compiler for C supports arguments -mpclmul: YES
00:03:23.978 Compiler for C supports arguments -maes: YES
00:03:23.978 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:23.978 Compiler for C supports arguments -mavx512bw: YES
00:03:23.978 Compiler for C supports arguments -mavx512dq: YES
00:03:23.978 Compiler for C supports arguments -mavx512vl: YES
00:03:23.978 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:23.978 Compiler for C supports arguments -mavx2: YES
00:03:23.978 Compiler for C supports arguments -mavx: YES
00:03:23.978 Message: lib/net: Defining dependency "net"
00:03:23.978 Message: lib/meter: Defining dependency "meter"
00:03:23.978 Message: lib/ethdev: Defining dependency "ethdev"
00:03:23.978 Message: lib/pci: Defining dependency "pci"
00:03:23.978 Message: lib/cmdline: Defining dependency "cmdline"
00:03:23.978 Message: lib/hash: Defining dependency "hash"
00:03:23.978 Message: lib/timer: Defining dependency "timer"
00:03:23.978 Message: lib/compressdev: Defining dependency "compressdev"
00:03:23.978 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:23.978 Message: lib/dmadev: Defining dependency "dmadev"
00:03:23.978 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:23.978 Message: lib/power: Defining dependency "power"
00:03:23.978 Message: lib/reorder: Defining dependency "reorder"
00:03:23.978 Message: lib/security: Defining dependency "security"
00:03:23.978 Has header "linux/userfaultfd.h" : YES
00:03:23.978 Has header "linux/vduse.h" : YES
00:03:23.978 Message: lib/vhost: Defining dependency "vhost"
00:03:23.978 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:23.978 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:23.978 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:23.978 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:23.978 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:23.978 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:23.978 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:23.978 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:23.978 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:23.978 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:23.978 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:23.978 Configuring doxy-api-html.conf using configuration
00:03:23.978 Configuring doxy-api-man.conf using configuration
00:03:23.978 Program mandb found: YES (/usr/bin/mandb)
00:03:23.978 Program sphinx-build found: NO
00:03:23.978 Configuring rte_build_config.h using configuration
00:03:23.978 Message: 
00:03:23.978 =================
00:03:23.978 Applications Enabled
00:03:23.978 =================
00:03:23.978 
00:03:23.978 apps:
00:03:23.978 
00:03:23.978 
00:03:23.978 Message: 
00:03:23.978 =================
00:03:23.978 Libraries Enabled
00:03:23.978 =================
00:03:23.978 
00:03:23.978 libs:
00:03:23.978 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:03:23.978 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:03:23.978 cryptodev, dmadev, power, reorder, security, vhost, 
00:03:23.978 
00:03:23.978 Message: 
00:03:23.978 ===============
00:03:23.978 Drivers Enabled
00:03:23.978 ===============
00:03:23.978 
00:03:23.978 common:
00:03:23.978 
00:03:23.978 bus:
00:03:23.978 pci, vdev, 
00:03:23.978 mempool:
00:03:23.978 ring, 
00:03:23.978 dma:
00:03:23.978 
00:03:23.978 net:
00:03:23.978 
00:03:23.978 crypto:
00:03:23.978 
00:03:23.978 compress:
00:03:23.978 
00:03:23.978 vdpa:
00:03:23.978 
00:03:23.978 
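
The Enabled/Skipped summary above reflects the meson options SPDK's dpdkbuild passes to its bundled DPDK: everything SPDK does not need is turned off. A sketch of how such a trimmed configuration is requested on a DPDK tree; enable_drivers, disable_libs, and tests are real DPDK meson options, but the exact values SPDK passes are not shown in this log, so the values here are illustrative:

    # Build only the drivers listed under "Drivers Enabled" above and
    # explicitly disable a subset of the libraries under "Content Skipped".
    meson setup build-tmp \
        -Denable_drivers=bus/pci,bus/vdev,mempool/ring \
        -Ddisable_libs=acl,bbdev,bpf,gro,gso,pcapng \
        -Dtests=false
    ninja -C build-tmp
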
00:03:23.979 regexdev: explicitly disabled via build config 00:03:23.979 mldev: explicitly disabled via build config 00:03:23.979 rib: explicitly disabled via build config 00:03:23.979 sched: explicitly disabled via build config 00:03:23.979 stack: explicitly disabled via build config 00:03:23.979 ipsec: explicitly disabled via build config 00:03:23.979 pdcp: explicitly disabled via build config 00:03:23.979 fib: explicitly disabled via build config 00:03:23.979 port: explicitly disabled via build config 00:03:23.979 pdump: explicitly disabled via build config 00:03:23.979 table: explicitly disabled via build config 00:03:23.979 pipeline: explicitly disabled via build config 00:03:23.979 graph: explicitly disabled via build config 00:03:23.979 node: explicitly disabled via build config 00:03:23.979 00:03:23.979 drivers: 00:03:23.979 common/cpt: not in enabled drivers build config 00:03:23.979 common/dpaax: not in enabled drivers build config 00:03:23.979 common/iavf: not in enabled drivers build config 00:03:23.979 common/idpf: not in enabled drivers build config 00:03:23.979 common/ionic: not in enabled drivers build config 00:03:23.979 common/mvep: not in enabled drivers build config 00:03:23.979 common/octeontx: not in enabled drivers build config 00:03:23.979 bus/auxiliary: not in enabled drivers build config 00:03:23.979 bus/cdx: not in enabled drivers build config 00:03:23.979 bus/dpaa: not in enabled drivers build config 00:03:23.979 bus/fslmc: not in enabled drivers build config 00:03:23.979 bus/ifpga: not in enabled drivers build config 00:03:23.979 bus/platform: not in enabled drivers build config 00:03:23.979 bus/uacce: not in enabled drivers build config 00:03:23.979 bus/vmbus: not in enabled drivers build config 00:03:23.979 common/cnxk: not in enabled drivers build config 00:03:23.979 common/mlx5: not in enabled drivers build config 00:03:23.979 common/nfp: not in enabled drivers build config 00:03:23.979 common/nitrox: not in enabled drivers build config 00:03:23.979 common/qat: not in enabled drivers build config 00:03:23.979 common/sfc_efx: not in enabled drivers build config 00:03:23.979 mempool/bucket: not in enabled drivers build config 00:03:23.979 mempool/cnxk: not in enabled drivers build config 00:03:23.979 mempool/dpaa: not in enabled drivers build config 00:03:23.979 mempool/dpaa2: not in enabled drivers build config 00:03:23.979 mempool/octeontx: not in enabled drivers build config 00:03:23.979 mempool/stack: not in enabled drivers build config 00:03:23.979 dma/cnxk: not in enabled drivers build config 00:03:23.979 dma/dpaa: not in enabled drivers build config 00:03:23.979 dma/dpaa2: not in enabled drivers build config 00:03:23.979 dma/hisilicon: not in enabled drivers build config 00:03:23.979 dma/idxd: not in enabled drivers build config 00:03:23.979 dma/ioat: not in enabled drivers build config 00:03:23.979 dma/skeleton: not in enabled drivers build config 00:03:23.979 net/af_packet: not in enabled drivers build config 00:03:23.979 net/af_xdp: not in enabled drivers build config 00:03:23.979 net/ark: not in enabled drivers build config 00:03:23.979 net/atlantic: not in enabled drivers build config 00:03:23.979 net/avp: not in enabled drivers build config 00:03:23.979 net/axgbe: not in enabled drivers build config 00:03:23.979 net/bnx2x: not in enabled drivers build config 00:03:23.979 net/bnxt: not in enabled drivers build config 00:03:23.979 net/bonding: not in enabled drivers build config 00:03:23.979 net/cnxk: not in enabled drivers build config 
00:03:23.979 net/cpfl: not in enabled drivers build config 00:03:23.979 net/cxgbe: not in enabled drivers build config 00:03:23.979 net/dpaa: not in enabled drivers build config 00:03:23.979 net/dpaa2: not in enabled drivers build config 00:03:23.979 net/e1000: not in enabled drivers build config 00:03:23.979 net/ena: not in enabled drivers build config 00:03:23.979 net/enetc: not in enabled drivers build config 00:03:23.979 net/enetfec: not in enabled drivers build config 00:03:23.979 net/enic: not in enabled drivers build config 00:03:23.979 net/failsafe: not in enabled drivers build config 00:03:23.979 net/fm10k: not in enabled drivers build config 00:03:23.979 net/gve: not in enabled drivers build config 00:03:23.979 net/hinic: not in enabled drivers build config 00:03:23.979 net/hns3: not in enabled drivers build config 00:03:23.979 net/i40e: not in enabled drivers build config 00:03:23.979 net/iavf: not in enabled drivers build config 00:03:23.979 net/ice: not in enabled drivers build config 00:03:23.979 net/idpf: not in enabled drivers build config 00:03:23.979 net/igc: not in enabled drivers build config 00:03:23.979 net/ionic: not in enabled drivers build config 00:03:23.979 net/ipn3ke: not in enabled drivers build config 00:03:23.979 net/ixgbe: not in enabled drivers build config 00:03:23.979 net/mana: not in enabled drivers build config 00:03:23.979 net/memif: not in enabled drivers build config 00:03:23.979 net/mlx4: not in enabled drivers build config 00:03:23.979 net/mlx5: not in enabled drivers build config 00:03:23.979 net/mvneta: not in enabled drivers build config 00:03:23.979 net/mvpp2: not in enabled drivers build config 00:03:23.979 net/netvsc: not in enabled drivers build config 00:03:23.979 net/nfb: not in enabled drivers build config 00:03:23.979 net/nfp: not in enabled drivers build config 00:03:23.979 net/ngbe: not in enabled drivers build config 00:03:23.979 net/null: not in enabled drivers build config 00:03:23.979 net/octeontx: not in enabled drivers build config 00:03:23.979 net/octeon_ep: not in enabled drivers build config 00:03:23.979 net/pcap: not in enabled drivers build config 00:03:23.979 net/pfe: not in enabled drivers build config 00:03:23.979 net/qede: not in enabled drivers build config 00:03:23.979 net/ring: not in enabled drivers build config 00:03:23.979 net/sfc: not in enabled drivers build config 00:03:23.979 net/softnic: not in enabled drivers build config 00:03:23.979 net/tap: not in enabled drivers build config 00:03:23.979 net/thunderx: not in enabled drivers build config 00:03:23.979 net/txgbe: not in enabled drivers build config 00:03:23.979 net/vdev_netvsc: not in enabled drivers build config 00:03:23.979 net/vhost: not in enabled drivers build config 00:03:23.979 net/virtio: not in enabled drivers build config 00:03:23.979 net/vmxnet3: not in enabled drivers build config 00:03:23.979 raw/*: missing internal dependency, "rawdev" 00:03:23.979 crypto/armv8: not in enabled drivers build config 00:03:23.979 crypto/bcmfs: not in enabled drivers build config 00:03:23.979 crypto/caam_jr: not in enabled drivers build config 00:03:23.979 crypto/ccp: not in enabled drivers build config 00:03:23.979 crypto/cnxk: not in enabled drivers build config 00:03:23.979 crypto/dpaa_sec: not in enabled drivers build config 00:03:23.979 crypto/dpaa2_sec: not in enabled drivers build config 00:03:23.979 crypto/ipsec_mb: not in enabled drivers build config 00:03:23.979 crypto/mlx5: not in enabled drivers build config 00:03:23.979 crypto/mvsam: not in enabled 
drivers build config 00:03:23.979 crypto/nitrox: not in enabled drivers build config 00:03:23.979 crypto/null: not in enabled drivers build config 00:03:23.979 crypto/octeontx: not in enabled drivers build config 00:03:23.979 crypto/openssl: not in enabled drivers build config 00:03:23.979 crypto/scheduler: not in enabled drivers build config 00:03:23.979 crypto/uadk: not in enabled drivers build config 00:03:23.979 crypto/virtio: not in enabled drivers build config 00:03:23.979 compress/isal: not in enabled drivers build config 00:03:23.979 compress/mlx5: not in enabled drivers build config 00:03:23.979 compress/nitrox: not in enabled drivers build config 00:03:23.979 compress/octeontx: not in enabled drivers build config 00:03:23.979 compress/zlib: not in enabled drivers build config 00:03:23.979 regex/*: missing internal dependency, "regexdev" 00:03:23.979 ml/*: missing internal dependency, "mldev" 00:03:23.979 vdpa/ifc: not in enabled drivers build config 00:03:23.979 vdpa/mlx5: not in enabled drivers build config 00:03:23.979 vdpa/nfp: not in enabled drivers build config 00:03:23.979 vdpa/sfc: not in enabled drivers build config 00:03:23.979 event/*: missing internal dependency, "eventdev" 00:03:23.979 baseband/*: missing internal dependency, "bbdev" 00:03:23.979 gpu/*: missing internal dependency, "gpudev" 00:03:23.979 00:03:23.979 00:03:24.239 Build targets in project: 85 00:03:24.239 00:03:24.239 DPDK 24.03.0 00:03:24.239 00:03:24.239 User defined options 00:03:24.239 buildtype : debug 00:03:24.239 default_library : shared 00:03:24.239 libdir : lib 00:03:24.239 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:24.239 b_sanitize : address 00:03:24.239 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:24.239 c_link_args : 00:03:24.239 cpu_instruction_set: native 00:03:24.239 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:24.239 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:24.239 enable_docs : false 00:03:24.239 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:24.239 enable_kmods : false 00:03:24.239 max_lcores : 128 00:03:24.239 tests : false 00:03:24.239 00:03:24.239 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
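For orientation, a "User defined options" summary like the one above is what meson echoes back after a setup step of roughly the following shape. This is a hypothetical reconstruction from the summary values, run from the dpdk/ subdirectory; the real invocation is generated by SPDK's configure script and does not appear in this log, and the long disable_apps/disable_libs lists are abbreviated here:

```bash
# Illustrative meson configure step implied by the summary above.
# Option names are real meson/DPDK options; the exact command is an assumption.
meson setup build-tmp \
  --buildtype=debug \
  --default-library=shared \
  --libdir=lib \
  --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
  -Db_sanitize=address \
  -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
  -Dcpu_instruction_set=native \
  -Ddisable_apps='dumpcap,graph,pdump,...' \
  -Ddisable_libs='acl,argparse,bbdev,...' \
  -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring,...' \
  -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
ninja -C build-tmp   # produces the [N/268] compile/link lines that follow
```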
00:03:24.807 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:24.808 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:24.808 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:24.808 [3/268] Linking static target lib/librte_kvargs.a 00:03:24.808 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:24.808 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:24.808 [6/268] Linking static target lib/librte_log.a 00:03:25.376 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.376 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:25.376 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:25.376 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:25.376 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:25.376 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:25.376 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:25.376 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:25.376 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:25.376 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:25.376 [17/268] Linking static target lib/librte_telemetry.a 00:03:25.635 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:25.894 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:25.894 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:25.894 [21/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.894 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:25.894 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:25.894 [24/268] Linking target lib/librte_log.so.24.1 00:03:25.894 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:26.153 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:26.153 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:26.153 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:26.153 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:26.153 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:26.153 [31/268] Linking target lib/librte_kvargs.so.24.1 00:03:26.153 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:26.412 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.412 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:26.412 [35/268] Linking target lib/librte_telemetry.so.24.1 00:03:26.671 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:26.671 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:26.671 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:26.671 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:26.671 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:26.671 [41/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:26.671 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:26.671 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:26.671 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:26.931 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:26.931 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:26.931 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:26.931 [48/268]
Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:26.931 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:26.931 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:27.214 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:27.214 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:27.489 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:27.490 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:27.490 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:27.490 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:27.490 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:27.490 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:27.490 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:27.490 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:27.749 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:27.749 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:27.749 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:27.749 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:28.007 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:28.007 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:28.007 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:28.007 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:28.266 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:28.266 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:28.266 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:28.266 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:28.266 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:28.525 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:28.525 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:28.525 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:28.525 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:28.525 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:28.784 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:28.784 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:28.784 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:28.784 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:28.784 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:28.784 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:28.784 [85/268] Linking static target lib/librte_eal.a 00:03:29.044 [86/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:29.044 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:29.044 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:29.044 [89/268] Compiling C 
object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:29.044 [90/268] Linking static target lib/librte_ring.a 00:03:29.044 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:29.044 [92/268] Linking static target lib/librte_rcu.a 00:03:29.304 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:29.304 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:29.304 [95/268] Linking static target lib/librte_mempool.a 00:03:29.304 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:29.564 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.564 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:29.564 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:29.564 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.824 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:29.824 [102/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:29.824 [103/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:29.824 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:29.824 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:30.083 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:30.083 [107/268] Linking static target lib/librte_mbuf.a 00:03:30.083 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:30.083 [109/268] Linking static target lib/librte_meter.a 00:03:30.083 [110/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:30.083 [111/268] Linking static target lib/librte_net.a 00:03:30.342 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:30.342 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:30.342 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:30.342 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.342 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.342 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.342 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:30.909 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:30.909 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:30.909 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.169 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:31.169 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:31.428 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:31.428 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:31.428 [126/268] Linking static target lib/librte_pci.a 00:03:31.428 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:31.688 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:31.688 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:31.688 [130/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:31.688 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:31.688 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:31.688 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:31.688 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:31.688 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:31.948 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.948 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:31.948 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:31.948 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:31.948 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:31.948 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:31.948 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:31.948 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:31.948 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:32.207 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:32.207 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:32.207 [147/268] Linking static target lib/librte_cmdline.a 00:03:32.207 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:32.468 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:32.468 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:32.468 [151/268] Linking static target lib/librte_timer.a 00:03:32.468 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:32.468 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:32.728 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:32.728 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:32.728 [156/268] Linking static target lib/librte_ethdev.a 00:03:32.728 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:32.988 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:32.988 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:32.988 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.988 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:32.988 [162/268] Linking static target lib/librte_hash.a 00:03:33.247 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:33.247 [164/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:33.247 [165/268] Linking static target lib/librte_compressdev.a 00:03:33.247 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:33.247 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:33.506 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:33.506 [169/268] Linking static target lib/librte_dmadev.a 00:03:33.506 [170/268] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:33.506 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:33.506 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:33.765 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:34.022 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.022 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:34.022 [176/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:34.280 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.280 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:34.280 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:34.280 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:34.280 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:34.280 [182/268] Linking static target lib/librte_cryptodev.a 00:03:34.280 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.539 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.539 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:34.539 [186/268] Linking static target lib/librte_power.a 00:03:34.798 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:34.798 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:34.798 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:34.798 [190/268] Linking static target lib/librte_reorder.a 00:03:34.798 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:35.056 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:35.056 [193/268] Linking static target lib/librte_security.a 00:03:35.315 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:35.315 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.883 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:35.883 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.883 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.884 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:35.884 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:35.884 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:36.141 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:36.141 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:36.399 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:36.399 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:36.399 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:36.399 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:36.658 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:36.658 [209/268] Compiling C 
object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:36.658 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:36.917 [211/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:36.917 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:36.917 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:36.917 [214/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.917 [215/268] Linking static target drivers/librte_bus_pci.a 00:03:36.917 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:36.917 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:36.917 [218/268] Linking static target drivers/librte_bus_vdev.a 00:03:36.917 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:36.917 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:36.917 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:37.177 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:37.177 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:37.177 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:37.177 [225/268] Linking static target drivers/librte_mempool_ring.a 00:03:37.177 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.436 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.373 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:41.666 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:41.666 [230/268] Linking static target lib/librte_vhost.a 00:03:41.666 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.925 [232/268] Linking target lib/librte_eal.so.24.1 00:03:41.925 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:41.925 [234/268] Linking target lib/librte_meter.so.24.1 00:03:41.925 [235/268] Linking target lib/librte_dmadev.so.24.1 00:03:41.925 [236/268] Linking target lib/librte_pci.so.24.1 00:03:41.925 [237/268] Linking target lib/librte_timer.so.24.1 00:03:41.925 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:41.925 [239/268] Linking target lib/librte_ring.so.24.1 00:03:42.185 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:42.185 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:42.185 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:42.185 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:42.185 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:42.185 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:42.185 [246/268] Linking target lib/librte_mempool.so.24.1 00:03:42.185 [247/268] Linking target lib/librte_rcu.so.24.1 00:03:42.444 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:42.444 [249/268] 
Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:42.444 [250/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.444 [251/268] Linking target lib/librte_mbuf.so.24.1 00:03:42.444 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:42.444 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:42.704 [254/268] Linking target lib/librte_net.so.24.1 00:03:42.704 [255/268] Linking target lib/librte_reorder.so.24.1 00:03:42.704 [256/268] Linking target lib/librte_compressdev.so.24.1 00:03:42.704 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:03:42.704 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:42.704 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:42.704 [260/268] Linking target lib/librte_cmdline.so.24.1 00:03:42.704 [261/268] Linking target lib/librte_security.so.24.1 00:03:42.704 [262/268] Linking target lib/librte_hash.so.24.1 00:03:42.704 [263/268] Linking target lib/librte_ethdev.so.24.1 00:03:42.963 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:42.963 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:42.964 [266/268] Linking target lib/librte_power.so.24.1 00:03:43.533 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.533 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:43.533 INFO: autodetecting backend as ninja 00:03:43.533 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:01.738 CC lib/log/log.o 00:04:01.738 CC lib/log/log_flags.o 00:04:01.738 CC lib/log/log_deprecated.o 00:04:01.738 CC lib/ut_mock/mock.o 00:04:01.738 CC lib/ut/ut.o 00:04:01.738 LIB libspdk_ut_mock.a 00:04:01.738 LIB libspdk_log.a 00:04:01.738 SO libspdk_ut_mock.so.6.0 00:04:01.738 LIB libspdk_ut.a 00:04:01.738 SO libspdk_log.so.7.1 00:04:01.738 SO libspdk_ut.so.2.0 00:04:01.738 SYMLINK libspdk_ut_mock.so 00:04:01.738 SYMLINK libspdk_log.so 00:04:01.738 SYMLINK libspdk_ut.so 00:04:01.738 CC lib/dma/dma.o 00:04:01.738 CC lib/util/base64.o 00:04:01.738 CC lib/util/cpuset.o 00:04:01.738 CC lib/util/bit_array.o 00:04:01.738 CC lib/util/crc16.o 00:04:01.738 CC lib/ioat/ioat.o 00:04:01.738 CC lib/util/crc32.o 00:04:01.738 CC lib/util/crc32c.o 00:04:01.738 CXX lib/trace_parser/trace.o 00:04:01.738 CC lib/vfio_user/host/vfio_user_pci.o 00:04:01.738 CC lib/util/crc32_ieee.o 00:04:01.738 CC lib/vfio_user/host/vfio_user.o 00:04:01.738 CC lib/util/crc64.o 00:04:01.738 CC lib/util/dif.o 00:04:01.738 LIB libspdk_dma.a 00:04:01.738 CC lib/util/fd.o 00:04:01.738 SO libspdk_dma.so.5.0 00:04:01.738 CC lib/util/fd_group.o 00:04:01.738 CC lib/util/file.o 00:04:01.738 CC lib/util/hexlify.o 00:04:01.738 SYMLINK libspdk_dma.so 00:04:01.738 CC lib/util/iov.o 00:04:01.738 LIB libspdk_ioat.a 00:04:01.738 SO libspdk_ioat.so.7.0 00:04:01.738 CC lib/util/math.o 00:04:01.738 CC lib/util/net.o 00:04:01.738 SYMLINK libspdk_ioat.so 00:04:01.738 CC lib/util/pipe.o 00:04:01.738 LIB libspdk_vfio_user.a 00:04:01.739 CC lib/util/strerror_tls.o 00:04:01.739 CC lib/util/string.o 00:04:01.739 SO libspdk_vfio_user.so.5.0 00:04:01.739 CC lib/util/uuid.o 00:04:01.739 SYMLINK libspdk_vfio_user.so 00:04:01.739 CC lib/util/xor.o 00:04:01.739 CC lib/util/zipf.o 00:04:01.739 CC 
lib/util/md5.o 00:04:01.739 LIB libspdk_util.a 00:04:01.739 LIB libspdk_trace_parser.a 00:04:01.739 SO libspdk_util.so.10.1 00:04:01.739 SO libspdk_trace_parser.so.6.0 00:04:01.997 SYMLINK libspdk_util.so 00:04:01.997 SYMLINK libspdk_trace_parser.so 00:04:02.256 CC lib/rdma_utils/rdma_utils.o 00:04:02.256 CC lib/conf/conf.o 00:04:02.256 CC lib/idxd/idxd.o 00:04:02.256 CC lib/idxd/idxd_user.o 00:04:02.256 CC lib/vmd/vmd.o 00:04:02.256 CC lib/idxd/idxd_kernel.o 00:04:02.256 CC lib/vmd/led.o 00:04:02.256 CC lib/json/json_parse.o 00:04:02.256 CC lib/json/json_util.o 00:04:02.256 CC lib/env_dpdk/env.o 00:04:02.256 CC lib/env_dpdk/memory.o 00:04:02.256 CC lib/json/json_write.o 00:04:02.515 LIB libspdk_conf.a 00:04:02.515 CC lib/env_dpdk/pci.o 00:04:02.515 LIB libspdk_rdma_utils.a 00:04:02.515 CC lib/env_dpdk/init.o 00:04:02.515 CC lib/env_dpdk/threads.o 00:04:02.515 SO libspdk_conf.so.6.0 00:04:02.515 SO libspdk_rdma_utils.so.1.0 00:04:02.515 SYMLINK libspdk_conf.so 00:04:02.515 CC lib/env_dpdk/pci_ioat.o 00:04:02.515 SYMLINK libspdk_rdma_utils.so 00:04:02.515 CC lib/env_dpdk/pci_virtio.o 00:04:02.515 CC lib/env_dpdk/pci_vmd.o 00:04:02.515 CC lib/env_dpdk/pci_idxd.o 00:04:02.515 CC lib/env_dpdk/pci_event.o 00:04:02.515 LIB libspdk_json.a 00:04:02.774 SO libspdk_json.so.6.0 00:04:02.774 CC lib/env_dpdk/sigbus_handler.o 00:04:02.774 SYMLINK libspdk_json.so 00:04:02.774 CC lib/env_dpdk/pci_dpdk.o 00:04:02.774 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:02.774 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:02.774 LIB libspdk_idxd.a 00:04:02.774 SO libspdk_idxd.so.12.1 00:04:02.774 LIB libspdk_vmd.a 00:04:02.774 CC lib/rdma_provider/common.o 00:04:02.774 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:03.034 CC lib/jsonrpc/jsonrpc_server.o 00:04:03.034 SO libspdk_vmd.so.6.0 00:04:03.034 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:03.034 SYMLINK libspdk_idxd.so 00:04:03.034 CC lib/jsonrpc/jsonrpc_client.o 00:04:03.034 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:03.034 SYMLINK libspdk_vmd.so 00:04:03.034 LIB libspdk_rdma_provider.a 00:04:03.293 SO libspdk_rdma_provider.so.7.0 00:04:03.293 LIB libspdk_jsonrpc.a 00:04:03.293 SYMLINK libspdk_rdma_provider.so 00:04:03.293 SO libspdk_jsonrpc.so.6.0 00:04:03.293 SYMLINK libspdk_jsonrpc.so 00:04:03.862 CC lib/rpc/rpc.o 00:04:03.862 LIB libspdk_env_dpdk.a 00:04:03.862 SO libspdk_env_dpdk.so.15.1 00:04:04.121 LIB libspdk_rpc.a 00:04:04.121 SYMLINK libspdk_env_dpdk.so 00:04:04.121 SO libspdk_rpc.so.6.0 00:04:04.121 SYMLINK libspdk_rpc.so 00:04:04.380 CC lib/trace/trace.o 00:04:04.380 CC lib/trace/trace_rpc.o 00:04:04.380 CC lib/trace/trace_flags.o 00:04:04.380 CC lib/notify/notify_rpc.o 00:04:04.380 CC lib/notify/notify.o 00:04:04.380 CC lib/keyring/keyring.o 00:04:04.381 CC lib/keyring/keyring_rpc.o 00:04:04.640 LIB libspdk_notify.a 00:04:04.640 SO libspdk_notify.so.6.0 00:04:04.640 LIB libspdk_keyring.a 00:04:04.640 LIB libspdk_trace.a 00:04:04.899 SYMLINK libspdk_notify.so 00:04:04.899 SO libspdk_keyring.so.2.0 00:04:04.899 SO libspdk_trace.so.11.0 00:04:04.899 SYMLINK libspdk_keyring.so 00:04:04.899 SYMLINK libspdk_trace.so 00:04:05.580 CC lib/thread/thread.o 00:04:05.580 CC lib/thread/iobuf.o 00:04:05.580 CC lib/sock/sock.o 00:04:05.580 CC lib/sock/sock_rpc.o 00:04:05.868 LIB libspdk_sock.a 00:04:05.868 SO libspdk_sock.so.10.0 00:04:05.868 SYMLINK libspdk_sock.so 00:04:06.448 CC lib/nvme/nvme_ctrlr.o 00:04:06.448 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:06.448 CC lib/nvme/nvme_fabric.o 00:04:06.448 CC lib/nvme/nvme_ns_cmd.o 00:04:06.448 CC lib/nvme/nvme_ns.o 00:04:06.448 CC 
lib/nvme/nvme_pcie_common.o 00:04:06.448 CC lib/nvme/nvme_pcie.o 00:04:06.448 CC lib/nvme/nvme.o 00:04:06.448 CC lib/nvme/nvme_qpair.o 00:04:07.016 LIB libspdk_thread.a 00:04:07.016 SO libspdk_thread.so.11.0 00:04:07.016 CC lib/nvme/nvme_quirks.o 00:04:07.016 CC lib/nvme/nvme_transport.o 00:04:07.016 SYMLINK libspdk_thread.so 00:04:07.016 CC lib/nvme/nvme_discovery.o 00:04:07.016 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:07.016 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:07.275 CC lib/nvme/nvme_tcp.o 00:04:07.275 CC lib/nvme/nvme_opal.o 00:04:07.275 CC lib/nvme/nvme_io_msg.o 00:04:07.275 CC lib/nvme/nvme_poll_group.o 00:04:07.534 CC lib/nvme/nvme_zns.o 00:04:07.534 CC lib/nvme/nvme_stubs.o 00:04:07.534 CC lib/nvme/nvme_auth.o 00:04:07.534 CC lib/nvme/nvme_cuse.o 00:04:07.793 CC lib/nvme/nvme_rdma.o 00:04:07.793 CC lib/accel/accel.o 00:04:08.052 CC lib/accel/accel_rpc.o 00:04:08.052 CC lib/blob/blobstore.o 00:04:08.052 CC lib/init/json_config.o 00:04:08.052 CC lib/virtio/virtio.o 00:04:08.052 CC lib/init/subsystem.o 00:04:08.311 CC lib/accel/accel_sw.o 00:04:08.311 CC lib/init/subsystem_rpc.o 00:04:08.311 CC lib/init/rpc.o 00:04:08.571 CC lib/virtio/virtio_vhost_user.o 00:04:08.571 CC lib/blob/request.o 00:04:08.571 CC lib/blob/zeroes.o 00:04:08.571 LIB libspdk_init.a 00:04:08.571 CC lib/fsdev/fsdev.o 00:04:08.571 SO libspdk_init.so.6.0 00:04:08.571 CC lib/fsdev/fsdev_io.o 00:04:08.571 CC lib/blob/blob_bs_dev.o 00:04:08.830 SYMLINK libspdk_init.so 00:04:08.830 CC lib/fsdev/fsdev_rpc.o 00:04:08.830 CC lib/virtio/virtio_vfio_user.o 00:04:08.830 CC lib/virtio/virtio_pci.o 00:04:08.830 CC lib/event/app.o 00:04:08.830 CC lib/event/log_rpc.o 00:04:08.830 CC lib/event/reactor.o 00:04:09.088 CC lib/event/app_rpc.o 00:04:09.088 CC lib/event/scheduler_static.o 00:04:09.088 LIB libspdk_accel.a 00:04:09.088 LIB libspdk_nvme.a 00:04:09.088 SO libspdk_accel.so.16.0 00:04:09.088 LIB libspdk_virtio.a 00:04:09.088 SYMLINK libspdk_accel.so 00:04:09.088 SO libspdk_virtio.so.7.0 00:04:09.348 SYMLINK libspdk_virtio.so 00:04:09.348 SO libspdk_nvme.so.15.0 00:04:09.348 LIB libspdk_fsdev.a 00:04:09.348 SO libspdk_fsdev.so.2.0 00:04:09.348 LIB libspdk_event.a 00:04:09.609 SYMLINK libspdk_fsdev.so 00:04:09.609 SO libspdk_event.so.14.0 00:04:09.609 CC lib/bdev/bdev.o 00:04:09.609 CC lib/bdev/bdev_rpc.o 00:04:09.609 CC lib/bdev/part.o 00:04:09.609 CC lib/bdev/bdev_zone.o 00:04:09.609 CC lib/bdev/scsi_nvme.o 00:04:09.609 SYMLINK libspdk_event.so 00:04:09.609 SYMLINK libspdk_nvme.so 00:04:09.609 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:10.546 LIB libspdk_fuse_dispatcher.a 00:04:10.546 SO libspdk_fuse_dispatcher.so.1.0 00:04:10.546 SYMLINK libspdk_fuse_dispatcher.so 00:04:11.484 LIB libspdk_blob.a 00:04:11.742 SO libspdk_blob.so.12.0 00:04:11.742 SYMLINK libspdk_blob.so 00:04:12.310 CC lib/blobfs/blobfs.o 00:04:12.310 CC lib/blobfs/tree.o 00:04:12.310 CC lib/lvol/lvol.o 00:04:12.570 LIB libspdk_bdev.a 00:04:12.828 SO libspdk_bdev.so.17.0 00:04:12.828 SYMLINK libspdk_bdev.so 00:04:13.087 LIB libspdk_blobfs.a 00:04:13.087 CC lib/ftl/ftl_core.o 00:04:13.087 CC lib/ftl/ftl_debug.o 00:04:13.087 CC lib/ftl/ftl_init.o 00:04:13.087 CC lib/ftl/ftl_layout.o 00:04:13.087 CC lib/scsi/dev.o 00:04:13.087 CC lib/nvmf/ctrlr.o 00:04:13.087 CC lib/nbd/nbd.o 00:04:13.087 SO libspdk_blobfs.so.11.0 00:04:13.087 CC lib/ublk/ublk.o 00:04:13.087 LIB libspdk_lvol.a 00:04:13.087 SYMLINK libspdk_blobfs.so 00:04:13.087 CC lib/nvmf/ctrlr_discovery.o 00:04:13.347 SO libspdk_lvol.so.11.0 00:04:13.347 CC lib/ftl/ftl_io.o 00:04:13.347 SYMLINK 
libspdk_lvol.so 00:04:13.347 CC lib/ftl/ftl_sb.o 00:04:13.347 CC lib/ftl/ftl_l2p.o 00:04:13.347 CC lib/scsi/lun.o 00:04:13.347 CC lib/nbd/nbd_rpc.o 00:04:13.606 CC lib/ftl/ftl_l2p_flat.o 00:04:13.606 CC lib/ftl/ftl_nv_cache.o 00:04:13.606 CC lib/ftl/ftl_band.o 00:04:13.606 CC lib/ftl/ftl_band_ops.o 00:04:13.606 CC lib/ftl/ftl_writer.o 00:04:13.606 LIB libspdk_nbd.a 00:04:13.606 SO libspdk_nbd.so.7.0 00:04:13.606 SYMLINK libspdk_nbd.so 00:04:13.606 CC lib/ftl/ftl_rq.o 00:04:13.606 CC lib/ftl/ftl_reloc.o 00:04:13.606 CC lib/scsi/port.o 00:04:13.606 CC lib/ftl/ftl_l2p_cache.o 00:04:13.864 CC lib/ftl/ftl_p2l.o 00:04:13.864 CC lib/scsi/scsi.o 00:04:13.864 CC lib/ublk/ublk_rpc.o 00:04:13.864 CC lib/scsi/scsi_bdev.o 00:04:13.864 CC lib/scsi/scsi_pr.o 00:04:13.864 CC lib/scsi/scsi_rpc.o 00:04:14.123 CC lib/scsi/task.o 00:04:14.123 CC lib/ftl/ftl_p2l_log.o 00:04:14.123 LIB libspdk_ublk.a 00:04:14.123 CC lib/ftl/mngt/ftl_mngt.o 00:04:14.123 SO libspdk_ublk.so.3.0 00:04:14.123 SYMLINK libspdk_ublk.so 00:04:14.123 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:14.123 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:14.381 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:14.381 CC lib/nvmf/ctrlr_bdev.o 00:04:14.381 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:14.381 CC lib/nvmf/subsystem.o 00:04:14.381 CC lib/nvmf/nvmf.o 00:04:14.381 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:14.381 CC lib/nvmf/nvmf_rpc.o 00:04:14.381 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:14.381 LIB libspdk_scsi.a 00:04:14.381 SO libspdk_scsi.so.9.0 00:04:14.640 CC lib/nvmf/transport.o 00:04:14.640 SYMLINK libspdk_scsi.so 00:04:14.640 CC lib/nvmf/tcp.o 00:04:14.640 CC lib/nvmf/stubs.o 00:04:14.640 CC lib/nvmf/mdns_server.o 00:04:14.640 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:14.899 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:15.157 CC lib/nvmf/rdma.o 00:04:15.158 CC lib/nvmf/auth.o 00:04:15.158 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:15.158 CC lib/iscsi/conn.o 00:04:15.158 CC lib/vhost/vhost.o 00:04:15.417 CC lib/vhost/vhost_rpc.o 00:04:15.417 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:15.417 CC lib/iscsi/init_grp.o 00:04:15.417 CC lib/iscsi/iscsi.o 00:04:15.417 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:15.676 CC lib/iscsi/param.o 00:04:15.676 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:15.935 CC lib/ftl/utils/ftl_conf.o 00:04:15.935 CC lib/ftl/utils/ftl_md.o 00:04:15.935 CC lib/iscsi/portal_grp.o 00:04:15.935 CC lib/iscsi/tgt_node.o 00:04:15.935 CC lib/vhost/vhost_scsi.o 00:04:15.935 CC lib/vhost/vhost_blk.o 00:04:15.935 CC lib/ftl/utils/ftl_mempool.o 00:04:16.195 CC lib/ftl/utils/ftl_bitmap.o 00:04:16.195 CC lib/vhost/rte_vhost_user.o 00:04:16.195 CC lib/ftl/utils/ftl_property.o 00:04:16.195 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:16.455 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:16.455 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:16.455 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:16.455 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:16.455 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:16.455 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:16.714 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:16.714 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:16.714 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:16.714 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:16.714 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:16.714 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:16.973 CC lib/iscsi/iscsi_subsystem.o 00:04:16.973 CC lib/iscsi/iscsi_rpc.o 00:04:16.973 CC lib/ftl/base/ftl_base_dev.o 00:04:16.973 CC lib/ftl/base/ftl_base_bdev.o 00:04:16.973 CC lib/ftl/ftl_trace.o 00:04:16.973 CC lib/iscsi/task.o 00:04:17.232 LIB libspdk_vhost.a 
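The CC/LIB/SO/SYMLINK lines in this stretch are SPDK's make output: CC compiles one object file, LIB archives a static library, SO links the versioned shared library, and SYMLINK points the unversioned name at it. As a rough sketch, the step that produces them boils down to the following (a hypothetical reconstruction; the real invocation comes from the job's autorun scripts, and only the debug build and address sanitizer are directly implied by the buildtype and b_sanitize options in the DPDK summary earlier):

```bash
# Illustrative SPDK build step behind the CC/LIB/SO/SYMLINK lines above.
# ./configure and make are SPDK's real entry points; the flag set is an
# assumption inferred from the debug/asan DPDK options, not copied from this log.
cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-asan --with-xnvme
make -j10
```

The xnvme bdev module does get compiled later in this log (module/bdev/xnvme), which is why --with-xnvme is shown, but the full flag set used by this job is not visible here.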
00:04:17.232 LIB libspdk_ftl.a 00:04:17.232 SO libspdk_vhost.so.8.0 00:04:17.232 LIB libspdk_iscsi.a 00:04:17.491 LIB libspdk_nvmf.a 00:04:17.491 SYMLINK libspdk_vhost.so 00:04:17.491 SO libspdk_iscsi.so.8.0 00:04:17.491 SO libspdk_ftl.so.9.0 00:04:17.491 SO libspdk_nvmf.so.20.0 00:04:17.751 SYMLINK libspdk_iscsi.so 00:04:17.751 SYMLINK libspdk_ftl.so 00:04:17.751 SYMLINK libspdk_nvmf.so 00:04:18.320 CC module/env_dpdk/env_dpdk_rpc.o 00:04:18.320 CC module/accel/ioat/accel_ioat.o 00:04:18.320 CC module/accel/dsa/accel_dsa.o 00:04:18.320 CC module/accel/error/accel_error.o 00:04:18.320 CC module/sock/posix/posix.o 00:04:18.320 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:18.320 CC module/fsdev/aio/fsdev_aio.o 00:04:18.320 CC module/keyring/linux/keyring.o 00:04:18.320 CC module/keyring/file/keyring.o 00:04:18.320 CC module/blob/bdev/blob_bdev.o 00:04:18.320 LIB libspdk_env_dpdk_rpc.a 00:04:18.320 SO libspdk_env_dpdk_rpc.so.6.0 00:04:18.580 SYMLINK libspdk_env_dpdk_rpc.so 00:04:18.580 CC module/keyring/file/keyring_rpc.o 00:04:18.580 CC module/keyring/linux/keyring_rpc.o 00:04:18.580 CC module/accel/ioat/accel_ioat_rpc.o 00:04:18.580 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:18.580 LIB libspdk_scheduler_dynamic.a 00:04:18.580 CC module/accel/error/accel_error_rpc.o 00:04:18.580 SO libspdk_scheduler_dynamic.so.4.0 00:04:18.580 CC module/accel/dsa/accel_dsa_rpc.o 00:04:18.580 LIB libspdk_keyring_linux.a 00:04:18.580 LIB libspdk_blob_bdev.a 00:04:18.580 LIB libspdk_accel_ioat.a 00:04:18.580 LIB libspdk_keyring_file.a 00:04:18.580 SYMLINK libspdk_scheduler_dynamic.so 00:04:18.580 SO libspdk_blob_bdev.so.12.0 00:04:18.580 SO libspdk_keyring_linux.so.1.0 00:04:18.580 SO libspdk_accel_ioat.so.6.0 00:04:18.580 SO libspdk_keyring_file.so.2.0 00:04:18.580 LIB libspdk_accel_error.a 00:04:18.580 SYMLINK libspdk_keyring_linux.so 00:04:18.580 CC module/fsdev/aio/linux_aio_mgr.o 00:04:18.580 SYMLINK libspdk_keyring_file.so 00:04:18.580 SYMLINK libspdk_blob_bdev.so 00:04:18.839 SO libspdk_accel_error.so.2.0 00:04:18.839 SYMLINK libspdk_accel_ioat.so 00:04:18.839 LIB libspdk_accel_dsa.a 00:04:18.840 SYMLINK libspdk_accel_error.so 00:04:18.840 SO libspdk_accel_dsa.so.5.0 00:04:18.840 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:18.840 SYMLINK libspdk_accel_dsa.so 00:04:18.840 CC module/scheduler/gscheduler/gscheduler.o 00:04:18.840 CC module/accel/iaa/accel_iaa.o 00:04:18.840 LIB libspdk_scheduler_dpdk_governor.a 00:04:19.125 CC module/bdev/error/vbdev_error.o 00:04:19.125 CC module/blobfs/bdev/blobfs_bdev.o 00:04:19.125 CC module/bdev/delay/vbdev_delay.o 00:04:19.125 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:19.125 CC module/bdev/gpt/gpt.o 00:04:19.125 LIB libspdk_scheduler_gscheduler.a 00:04:19.125 LIB libspdk_fsdev_aio.a 00:04:19.125 SO libspdk_scheduler_gscheduler.so.4.0 00:04:19.125 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:19.125 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:19.125 CC module/bdev/lvol/vbdev_lvol.o 00:04:19.125 SO libspdk_fsdev_aio.so.1.0 00:04:19.125 CC module/accel/iaa/accel_iaa_rpc.o 00:04:19.125 SYMLINK libspdk_scheduler_gscheduler.so 00:04:19.125 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:19.125 LIB libspdk_sock_posix.a 00:04:19.125 SYMLINK libspdk_fsdev_aio.so 00:04:19.125 CC module/bdev/error/vbdev_error_rpc.o 00:04:19.125 SO libspdk_sock_posix.so.6.0 00:04:19.125 CC module/bdev/gpt/vbdev_gpt.o 00:04:19.125 LIB libspdk_accel_iaa.a 00:04:19.404 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:19.404 SO libspdk_accel_iaa.so.3.0 00:04:19.404 SYMLINK 
libspdk_sock_posix.so 00:04:19.404 LIB libspdk_blobfs_bdev.a 00:04:19.404 SO libspdk_blobfs_bdev.so.6.0 00:04:19.404 LIB libspdk_bdev_error.a 00:04:19.404 SYMLINK libspdk_accel_iaa.so 00:04:19.404 SO libspdk_bdev_error.so.6.0 00:04:19.404 CC module/bdev/malloc/bdev_malloc.o 00:04:19.404 LIB libspdk_bdev_delay.a 00:04:19.404 SYMLINK libspdk_blobfs_bdev.so 00:04:19.404 SO libspdk_bdev_delay.so.6.0 00:04:19.404 CC module/bdev/null/bdev_null.o 00:04:19.404 SYMLINK libspdk_bdev_error.so 00:04:19.404 CC module/bdev/null/bdev_null_rpc.o 00:04:19.404 CC module/bdev/nvme/bdev_nvme.o 00:04:19.404 LIB libspdk_bdev_gpt.a 00:04:19.404 SYMLINK libspdk_bdev_delay.so 00:04:19.404 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:19.663 SO libspdk_bdev_gpt.so.6.0 00:04:19.663 CC module/bdev/passthru/vbdev_passthru.o 00:04:19.663 SYMLINK libspdk_bdev_gpt.so 00:04:19.663 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:19.663 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:19.663 CC module/bdev/raid/bdev_raid.o 00:04:19.663 CC module/bdev/nvme/nvme_rpc.o 00:04:19.663 LIB libspdk_bdev_lvol.a 00:04:19.663 SO libspdk_bdev_lvol.so.6.0 00:04:19.663 LIB libspdk_bdev_null.a 00:04:19.663 SO libspdk_bdev_null.so.6.0 00:04:19.663 SYMLINK libspdk_bdev_lvol.so 00:04:19.663 CC module/bdev/nvme/bdev_mdns_client.o 00:04:19.663 LIB libspdk_bdev_malloc.a 00:04:19.922 SYMLINK libspdk_bdev_null.so 00:04:19.922 SO libspdk_bdev_malloc.so.6.0 00:04:19.922 LIB libspdk_bdev_passthru.a 00:04:19.922 CC module/bdev/nvme/vbdev_opal.o 00:04:19.922 SO libspdk_bdev_passthru.so.6.0 00:04:19.922 SYMLINK libspdk_bdev_malloc.so 00:04:19.922 CC module/bdev/split/vbdev_split.o 00:04:19.922 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:19.922 CC module/bdev/split/vbdev_split_rpc.o 00:04:19.922 SYMLINK libspdk_bdev_passthru.so 00:04:19.922 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:19.922 CC module/bdev/xnvme/bdev_xnvme.o 00:04:19.922 CC module/bdev/aio/bdev_aio.o 00:04:20.182 CC module/bdev/aio/bdev_aio_rpc.o 00:04:20.182 LIB libspdk_bdev_split.a 00:04:20.182 SO libspdk_bdev_split.so.6.0 00:04:20.182 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:20.182 LIB libspdk_bdev_zone_block.a 00:04:20.182 SYMLINK libspdk_bdev_split.so 00:04:20.182 CC module/bdev/raid/bdev_raid_rpc.o 00:04:20.182 CC module/bdev/raid/bdev_raid_sb.o 00:04:20.182 CC module/bdev/ftl/bdev_ftl.o 00:04:20.182 CC module/bdev/iscsi/bdev_iscsi.o 00:04:20.441 SO libspdk_bdev_zone_block.so.6.0 00:04:20.441 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:20.441 SYMLINK libspdk_bdev_zone_block.so 00:04:20.441 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:20.441 LIB libspdk_bdev_aio.a 00:04:20.441 LIB libspdk_bdev_xnvme.a 00:04:20.441 SO libspdk_bdev_xnvme.so.3.0 00:04:20.441 SO libspdk_bdev_aio.so.6.0 00:04:20.441 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:20.441 SYMLINK libspdk_bdev_xnvme.so 00:04:20.441 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:20.700 SYMLINK libspdk_bdev_aio.so 00:04:20.700 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:20.700 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:20.700 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:20.700 CC module/bdev/raid/raid0.o 00:04:20.700 CC module/bdev/raid/raid1.o 00:04:20.701 CC module/bdev/raid/concat.o 00:04:20.701 LIB libspdk_bdev_iscsi.a 00:04:20.701 SO libspdk_bdev_iscsi.so.6.0 00:04:20.701 LIB libspdk_bdev_ftl.a 00:04:20.959 SYMLINK libspdk_bdev_iscsi.so 00:04:20.959 SO libspdk_bdev_ftl.so.6.0 00:04:20.959 LIB libspdk_bdev_virtio.a 00:04:20.959 SYMLINK libspdk_bdev_ftl.so 00:04:20.959 SO libspdk_bdev_virtio.so.6.0 
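Each LIB/SO/SYMLINK triple around here corresponds roughly to the following three commands, shown as an illustrative expansion using the libspdk_bdev_raid names from the next lines (paths, the $OBJS placeholder, and the exact linker flags are assumptions, not SPDK's literal make rules):

```bash
# Rough equivalents of one library's build steps, with $OBJS standing in
# for that library's object files:
ar rcs build/lib/libspdk_bdev_raid.a $OBJS                      # LIB: static archive
cc -shared -o build/lib/libspdk_bdev_raid.so.6.0 \
   -Wl,-soname,libspdk_bdev_raid.so.6.0 $OBJS                   # SO: versioned shared lib
ln -sf libspdk_bdev_raid.so.6.0 build/lib/libspdk_bdev_raid.so  # SYMLINK: unversioned alias
```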
00:04:20.959 LIB libspdk_bdev_raid.a 00:04:20.959 SYMLINK libspdk_bdev_virtio.so 00:04:21.218 SO libspdk_bdev_raid.so.6.0 00:04:21.218 SYMLINK libspdk_bdev_raid.so 00:04:22.155 LIB libspdk_bdev_nvme.a 00:04:22.414 SO libspdk_bdev_nvme.so.7.1 00:04:22.414 SYMLINK libspdk_bdev_nvme.so 00:04:23.350 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:23.350 CC module/event/subsystems/iobuf/iobuf.o 00:04:23.350 CC module/event/subsystems/keyring/keyring.o 00:04:23.350 CC module/event/subsystems/scheduler/scheduler.o 00:04:23.350 CC module/event/subsystems/vmd/vmd.o 00:04:23.351 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:23.351 CC module/event/subsystems/sock/sock.o 00:04:23.351 CC module/event/subsystems/fsdev/fsdev.o 00:04:23.351 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:23.351 LIB libspdk_event_scheduler.a 00:04:23.351 LIB libspdk_event_vmd.a 00:04:23.351 LIB libspdk_event_keyring.a 00:04:23.351 LIB libspdk_event_fsdev.a 00:04:23.351 LIB libspdk_event_iobuf.a 00:04:23.351 LIB libspdk_event_vhost_blk.a 00:04:23.351 LIB libspdk_event_sock.a 00:04:23.351 SO libspdk_event_scheduler.so.4.0 00:04:23.351 SO libspdk_event_keyring.so.1.0 00:04:23.351 SO libspdk_event_vmd.so.6.0 00:04:23.351 SO libspdk_event_vhost_blk.so.3.0 00:04:23.351 SO libspdk_event_fsdev.so.1.0 00:04:23.351 SO libspdk_event_iobuf.so.3.0 00:04:23.351 SO libspdk_event_sock.so.5.0 00:04:23.351 SYMLINK libspdk_event_scheduler.so 00:04:23.351 SYMLINK libspdk_event_keyring.so 00:04:23.351 SYMLINK libspdk_event_vmd.so 00:04:23.351 SYMLINK libspdk_event_fsdev.so 00:04:23.351 SYMLINK libspdk_event_vhost_blk.so 00:04:23.351 SYMLINK libspdk_event_sock.so 00:04:23.351 SYMLINK libspdk_event_iobuf.so 00:04:23.919 CC module/event/subsystems/accel/accel.o 00:04:23.919 LIB libspdk_event_accel.a 00:04:23.919 SO libspdk_event_accel.so.6.0 00:04:24.178 SYMLINK libspdk_event_accel.so 00:04:24.437 CC module/event/subsystems/bdev/bdev.o 00:04:24.695 LIB libspdk_event_bdev.a 00:04:24.695 SO libspdk_event_bdev.so.6.0 00:04:24.695 SYMLINK libspdk_event_bdev.so 00:04:25.264 CC module/event/subsystems/ublk/ublk.o 00:04:25.264 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:25.264 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:25.264 CC module/event/subsystems/scsi/scsi.o 00:04:25.264 CC module/event/subsystems/nbd/nbd.o 00:04:25.264 LIB libspdk_event_scsi.a 00:04:25.264 LIB libspdk_event_ublk.a 00:04:25.264 LIB libspdk_event_nbd.a 00:04:25.264 SO libspdk_event_ublk.so.3.0 00:04:25.264 SO libspdk_event_nbd.so.6.0 00:04:25.264 SO libspdk_event_scsi.so.6.0 00:04:25.264 SYMLINK libspdk_event_ublk.so 00:04:25.264 LIB libspdk_event_nvmf.a 00:04:25.264 SYMLINK libspdk_event_scsi.so 00:04:25.264 SYMLINK libspdk_event_nbd.so 00:04:25.524 SO libspdk_event_nvmf.so.6.0 00:04:25.524 SYMLINK libspdk_event_nvmf.so 00:04:25.784 CC module/event/subsystems/iscsi/iscsi.o 00:04:25.784 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:25.784 LIB libspdk_event_vhost_scsi.a 00:04:26.044 LIB libspdk_event_iscsi.a 00:04:26.044 SO libspdk_event_vhost_scsi.so.3.0 00:04:26.044 SO libspdk_event_iscsi.so.6.0 00:04:26.044 SYMLINK libspdk_event_vhost_scsi.so 00:04:26.044 SYMLINK libspdk_event_iscsi.so 00:04:26.304 SO libspdk.so.6.0 00:04:26.304 SYMLINK libspdk.so 00:04:26.564 CC test/rpc_client/rpc_client_test.o 00:04:26.564 CXX app/trace/trace.o 00:04:26.564 TEST_HEADER include/spdk/accel.h 00:04:26.564 TEST_HEADER include/spdk/accel_module.h 00:04:26.564 TEST_HEADER include/spdk/assert.h 00:04:26.564 TEST_HEADER include/spdk/barrier.h 00:04:26.564 TEST_HEADER 
include/spdk/base64.h 00:04:26.564 TEST_HEADER include/spdk/bdev.h 00:04:26.564 TEST_HEADER include/spdk/bdev_module.h 00:04:26.564 TEST_HEADER include/spdk/bdev_zone.h 00:04:26.564 TEST_HEADER include/spdk/bit_array.h 00:04:26.564 TEST_HEADER include/spdk/bit_pool.h 00:04:26.564 TEST_HEADER include/spdk/blob_bdev.h 00:04:26.564 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:26.564 TEST_HEADER include/spdk/blobfs.h 00:04:26.564 TEST_HEADER include/spdk/blob.h 00:04:26.564 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:26.564 TEST_HEADER include/spdk/conf.h 00:04:26.564 TEST_HEADER include/spdk/config.h 00:04:26.564 TEST_HEADER include/spdk/cpuset.h 00:04:26.564 TEST_HEADER include/spdk/crc16.h 00:04:26.564 TEST_HEADER include/spdk/crc32.h 00:04:26.564 TEST_HEADER include/spdk/crc64.h 00:04:26.564 TEST_HEADER include/spdk/dif.h 00:04:26.564 TEST_HEADER include/spdk/dma.h 00:04:26.564 TEST_HEADER include/spdk/endian.h 00:04:26.564 TEST_HEADER include/spdk/env_dpdk.h 00:04:26.564 TEST_HEADER include/spdk/env.h 00:04:26.564 TEST_HEADER include/spdk/event.h 00:04:26.564 TEST_HEADER include/spdk/fd_group.h 00:04:26.564 TEST_HEADER include/spdk/fd.h 00:04:26.564 TEST_HEADER include/spdk/file.h 00:04:26.564 TEST_HEADER include/spdk/fsdev.h 00:04:26.564 TEST_HEADER include/spdk/fsdev_module.h 00:04:26.564 TEST_HEADER include/spdk/ftl.h 00:04:26.564 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:26.564 TEST_HEADER include/spdk/gpt_spec.h 00:04:26.564 CC examples/ioat/perf/perf.o 00:04:26.564 TEST_HEADER include/spdk/hexlify.h 00:04:26.564 TEST_HEADER include/spdk/histogram_data.h 00:04:26.564 CC examples/util/zipf/zipf.o 00:04:26.564 TEST_HEADER include/spdk/idxd.h 00:04:26.564 TEST_HEADER include/spdk/idxd_spec.h 00:04:26.564 TEST_HEADER include/spdk/init.h 00:04:26.564 TEST_HEADER include/spdk/ioat.h 00:04:26.564 TEST_HEADER include/spdk/ioat_spec.h 00:04:26.564 TEST_HEADER include/spdk/iscsi_spec.h 00:04:26.564 TEST_HEADER include/spdk/json.h 00:04:26.564 CC test/thread/poller_perf/poller_perf.o 00:04:26.564 TEST_HEADER include/spdk/jsonrpc.h 00:04:26.564 TEST_HEADER include/spdk/keyring.h 00:04:26.564 TEST_HEADER include/spdk/keyring_module.h 00:04:26.564 TEST_HEADER include/spdk/likely.h 00:04:26.564 TEST_HEADER include/spdk/log.h 00:04:26.564 TEST_HEADER include/spdk/lvol.h 00:04:26.564 TEST_HEADER include/spdk/md5.h 00:04:26.564 TEST_HEADER include/spdk/memory.h 00:04:26.822 CC test/dma/test_dma/test_dma.o 00:04:26.822 TEST_HEADER include/spdk/mmio.h 00:04:26.822 TEST_HEADER include/spdk/nbd.h 00:04:26.822 TEST_HEADER include/spdk/net.h 00:04:26.822 TEST_HEADER include/spdk/notify.h 00:04:26.822 TEST_HEADER include/spdk/nvme.h 00:04:26.822 TEST_HEADER include/spdk/nvme_intel.h 00:04:26.822 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:26.822 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:26.822 TEST_HEADER include/spdk/nvme_spec.h 00:04:26.822 TEST_HEADER include/spdk/nvme_zns.h 00:04:26.822 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:26.822 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:26.822 TEST_HEADER include/spdk/nvmf.h 00:04:26.822 TEST_HEADER include/spdk/nvmf_spec.h 00:04:26.823 TEST_HEADER include/spdk/nvmf_transport.h 00:04:26.823 CC test/app/bdev_svc/bdev_svc.o 00:04:26.823 TEST_HEADER include/spdk/opal.h 00:04:26.823 TEST_HEADER include/spdk/opal_spec.h 00:04:26.823 TEST_HEADER include/spdk/pci_ids.h 00:04:26.823 CC test/env/mem_callbacks/mem_callbacks.o 00:04:26.823 TEST_HEADER include/spdk/pipe.h 00:04:26.823 TEST_HEADER include/spdk/queue.h 00:04:26.823 TEST_HEADER 
include/spdk/reduce.h 00:04:26.823 TEST_HEADER include/spdk/rpc.h 00:04:26.823 TEST_HEADER include/spdk/scheduler.h 00:04:26.823 TEST_HEADER include/spdk/scsi.h 00:04:26.823 TEST_HEADER include/spdk/scsi_spec.h 00:04:26.823 TEST_HEADER include/spdk/sock.h 00:04:26.823 TEST_HEADER include/spdk/stdinc.h 00:04:26.823 TEST_HEADER include/spdk/string.h 00:04:26.823 TEST_HEADER include/spdk/thread.h 00:04:26.823 TEST_HEADER include/spdk/trace.h 00:04:26.823 TEST_HEADER include/spdk/trace_parser.h 00:04:26.823 TEST_HEADER include/spdk/tree.h 00:04:26.823 TEST_HEADER include/spdk/ublk.h 00:04:26.823 TEST_HEADER include/spdk/util.h 00:04:26.823 TEST_HEADER include/spdk/uuid.h 00:04:26.823 TEST_HEADER include/spdk/version.h 00:04:26.823 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:26.823 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:26.823 LINK rpc_client_test 00:04:26.823 TEST_HEADER include/spdk/vhost.h 00:04:26.823 TEST_HEADER include/spdk/vmd.h 00:04:26.823 TEST_HEADER include/spdk/xor.h 00:04:26.823 TEST_HEADER include/spdk/zipf.h 00:04:26.823 CXX test/cpp_headers/accel.o 00:04:26.823 LINK interrupt_tgt 00:04:26.823 LINK zipf 00:04:26.823 LINK poller_perf 00:04:26.823 LINK bdev_svc 00:04:26.823 LINK ioat_perf 00:04:26.823 CXX test/cpp_headers/accel_module.o 00:04:26.823 CXX test/cpp_headers/assert.o 00:04:27.081 LINK spdk_trace 00:04:27.081 CXX test/cpp_headers/barrier.o 00:04:27.081 CC test/env/vtophys/vtophys.o 00:04:27.081 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:27.081 CXX test/cpp_headers/base64.o 00:04:27.081 LINK vtophys 00:04:27.081 CC examples/ioat/verify/verify.o 00:04:27.081 CC test/env/memory/memory_ut.o 00:04:27.081 LINK test_dma 00:04:27.081 CC app/trace_record/trace_record.o 00:04:27.340 LINK env_dpdk_post_init 00:04:27.340 LINK mem_callbacks 00:04:27.340 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:27.340 CXX test/cpp_headers/bdev.o 00:04:27.340 CC test/event/event_perf/event_perf.o 00:04:27.340 CC test/event/reactor/reactor.o 00:04:27.340 LINK verify 00:04:27.340 CC test/event/reactor_perf/reactor_perf.o 00:04:27.340 LINK event_perf 00:04:27.340 LINK spdk_trace_record 00:04:27.340 CXX test/cpp_headers/bdev_module.o 00:04:27.340 CC test/event/app_repeat/app_repeat.o 00:04:27.599 CC test/app/histogram_perf/histogram_perf.o 00:04:27.599 LINK reactor 00:04:27.599 LINK reactor_perf 00:04:27.599 LINK app_repeat 00:04:27.599 CXX test/cpp_headers/bdev_zone.o 00:04:27.599 LINK histogram_perf 00:04:27.599 LINK nvme_fuzz 00:04:27.599 CXX test/cpp_headers/bit_array.o 00:04:27.599 CC examples/thread/thread/thread_ex.o 00:04:27.599 CC app/nvmf_tgt/nvmf_main.o 00:04:27.858 CC examples/sock/hello_world/hello_sock.o 00:04:27.858 CXX test/cpp_headers/bit_pool.o 00:04:27.858 CXX test/cpp_headers/blob_bdev.o 00:04:27.858 CC test/event/scheduler/scheduler.o 00:04:27.858 LINK nvmf_tgt 00:04:27.858 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:27.858 CXX test/cpp_headers/blobfs_bdev.o 00:04:27.858 CC examples/vmd/lsvmd/lsvmd.o 00:04:27.858 LINK thread 00:04:28.129 LINK scheduler 00:04:28.129 LINK hello_sock 00:04:28.129 CC test/accel/dif/dif.o 00:04:28.129 LINK lsvmd 00:04:28.129 CC test/blobfs/mkfs/mkfs.o 00:04:28.129 CXX test/cpp_headers/blobfs.o 00:04:28.129 CC app/iscsi_tgt/iscsi_tgt.o 00:04:28.418 LINK memory_ut 00:04:28.418 CC app/spdk_lspci/spdk_lspci.o 00:04:28.418 LINK mkfs 00:04:28.418 CC examples/vmd/led/led.o 00:04:28.418 CC app/spdk_tgt/spdk_tgt.o 00:04:28.418 CXX test/cpp_headers/blob.o 00:04:28.418 CC test/app/jsoncat/jsoncat.o 00:04:28.418 LINK iscsi_tgt 
00:04:28.418 LINK spdk_lspci 00:04:28.418 LINK led 00:04:28.418 LINK jsoncat 00:04:28.418 CXX test/cpp_headers/conf.o 00:04:28.677 CC test/env/pci/pci_ut.o 00:04:28.677 LINK spdk_tgt 00:04:28.677 CXX test/cpp_headers/config.o 00:04:28.677 CC test/app/stub/stub.o 00:04:28.677 CXX test/cpp_headers/cpuset.o 00:04:28.677 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:28.677 LINK stub 00:04:28.936 LINK dif 00:04:28.936 CC examples/idxd/perf/perf.o 00:04:28.936 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:28.936 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:28.936 CXX test/cpp_headers/crc16.o 00:04:28.936 CC app/spdk_nvme_perf/perf.o 00:04:28.936 CC test/lvol/esnap/esnap.o 00:04:28.936 CXX test/cpp_headers/crc32.o 00:04:28.936 LINK pci_ut 00:04:29.196 CC test/nvme/aer/aer.o 00:04:29.196 LINK hello_fsdev 00:04:29.196 CXX test/cpp_headers/crc64.o 00:04:29.196 LINK idxd_perf 00:04:29.196 CC test/bdev/bdevio/bdevio.o 00:04:29.196 CXX test/cpp_headers/dif.o 00:04:29.196 LINK vhost_fuzz 00:04:29.196 CXX test/cpp_headers/dma.o 00:04:29.455 CXX test/cpp_headers/endian.o 00:04:29.455 LINK aer 00:04:29.455 CC examples/accel/perf/accel_perf.o 00:04:29.455 CC examples/blob/hello_world/hello_blob.o 00:04:29.455 CC test/nvme/reset/reset.o 00:04:29.455 CXX test/cpp_headers/env_dpdk.o 00:04:29.455 CXX test/cpp_headers/env.o 00:04:29.455 CC examples/nvme/hello_world/hello_world.o 00:04:29.714 LINK bdevio 00:04:29.714 LINK iscsi_fuzz 00:04:29.714 CXX test/cpp_headers/event.o 00:04:29.714 LINK hello_blob 00:04:29.714 LINK spdk_nvme_perf 00:04:29.714 LINK hello_world 00:04:29.714 CC test/nvme/sgl/sgl.o 00:04:29.714 LINK reset 00:04:29.972 CXX test/cpp_headers/fd_group.o 00:04:29.972 CC test/nvme/e2edp/nvme_dp.o 00:04:29.972 CC app/spdk_nvme_identify/identify.o 00:04:29.972 CC test/nvme/overhead/overhead.o 00:04:29.972 LINK accel_perf 00:04:29.972 CC test/nvme/err_injection/err_injection.o 00:04:29.972 CC examples/nvme/reconnect/reconnect.o 00:04:29.972 CC examples/blob/cli/blobcli.o 00:04:29.972 CXX test/cpp_headers/fd.o 00:04:29.972 LINK sgl 00:04:30.230 LINK nvme_dp 00:04:30.230 LINK err_injection 00:04:30.230 CXX test/cpp_headers/file.o 00:04:30.230 CXX test/cpp_headers/fsdev.o 00:04:30.230 LINK overhead 00:04:30.230 CC test/nvme/startup/startup.o 00:04:30.230 CXX test/cpp_headers/fsdev_module.o 00:04:30.490 LINK reconnect 00:04:30.490 LINK startup 00:04:30.490 CC examples/nvme/arbitration/arbitration.o 00:04:30.490 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:30.490 CXX test/cpp_headers/ftl.o 00:04:30.490 CC test/nvme/reserve/reserve.o 00:04:30.490 LINK blobcli 00:04:30.490 CC examples/bdev/hello_world/hello_bdev.o 00:04:30.749 CC test/nvme/simple_copy/simple_copy.o 00:04:30.749 CC examples/bdev/bdevperf/bdevperf.o 00:04:30.749 CXX test/cpp_headers/fuse_dispatcher.o 00:04:30.749 LINK reserve 00:04:30.749 CC test/nvme/connect_stress/connect_stress.o 00:04:30.749 LINK arbitration 00:04:30.749 LINK hello_bdev 00:04:30.749 CXX test/cpp_headers/gpt_spec.o 00:04:31.009 LINK spdk_nvme_identify 00:04:31.009 LINK simple_copy 00:04:31.009 CC app/spdk_nvme_discover/discovery_aer.o 00:04:31.009 LINK connect_stress 00:04:31.009 LINK nvme_manage 00:04:31.009 CXX test/cpp_headers/hexlify.o 00:04:31.009 CC examples/nvme/hotplug/hotplug.o 00:04:31.009 CC app/spdk_top/spdk_top.o 00:04:31.269 LINK spdk_nvme_discover 00:04:31.269 CC test/nvme/boot_partition/boot_partition.o 00:04:31.269 CXX test/cpp_headers/histogram_data.o 00:04:31.269 CC app/vhost/vhost.o 00:04:31.269 CC test/nvme/compliance/nvme_compliance.o 
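The long run of CXX test/cpp_headers/<name>.o lines threaded through this build is a header self-containment check: every public SPDK header is compiled in a tiny C++ translation unit of its own, so a header that forgets one of its own includes (or is not C++-clean) fails the build immediately rather than in whichever file happens to include it first. A hand-rolled equivalent of that check, illustrative only — the -I path and compiler flags here are assumptions, not the autotest build rules:

for hdr in include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    # One throwaway C++ translation unit per header; nothing but the include.
    printf '#include <spdk/%s.h>\n' "$name" > "/tmp/${name}.cpp"
    g++ -std=c++11 -I include -c "/tmp/${name}.cpp" -o "/tmp/${name}.o" \
        || echo "spdk/${name}.h is not self-contained"
done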
00:04:31.269 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:31.269 CXX test/cpp_headers/idxd.o 00:04:31.269 LINK hotplug 00:04:31.269 LINK boot_partition 00:04:31.269 LINK vhost 00:04:31.527 CC examples/nvme/abort/abort.o 00:04:31.527 LINK cmb_copy 00:04:31.527 CXX test/cpp_headers/idxd_spec.o 00:04:31.527 CXX test/cpp_headers/init.o 00:04:31.527 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:31.527 LINK bdevperf 00:04:31.527 LINK nvme_compliance 00:04:31.527 CXX test/cpp_headers/ioat.o 00:04:31.810 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:31.810 CC test/nvme/fused_ordering/fused_ordering.o 00:04:31.810 LINK pmr_persistence 00:04:31.810 CC app/spdk_dd/spdk_dd.o 00:04:31.810 LINK abort 00:04:31.810 CXX test/cpp_headers/ioat_spec.o 00:04:31.810 CC test/nvme/fdp/fdp.o 00:04:31.810 CC test/nvme/cuse/cuse.o 00:04:31.810 LINK doorbell_aers 00:04:31.810 CXX test/cpp_headers/iscsi_spec.o 00:04:31.810 LINK fused_ordering 00:04:32.069 CXX test/cpp_headers/json.o 00:04:32.069 CXX test/cpp_headers/jsonrpc.o 00:04:32.069 CXX test/cpp_headers/keyring.o 00:04:32.069 LINK spdk_top 00:04:32.069 LINK spdk_dd 00:04:32.069 CC examples/nvmf/nvmf/nvmf.o 00:04:32.069 CC app/fio/nvme/fio_plugin.o 00:04:32.070 LINK fdp 00:04:32.070 CXX test/cpp_headers/keyring_module.o 00:04:32.329 CXX test/cpp_headers/likely.o 00:04:32.329 CXX test/cpp_headers/log.o 00:04:32.329 CC app/fio/bdev/fio_plugin.o 00:04:32.329 CXX test/cpp_headers/lvol.o 00:04:32.329 CXX test/cpp_headers/md5.o 00:04:32.329 CXX test/cpp_headers/memory.o 00:04:32.329 CXX test/cpp_headers/mmio.o 00:04:32.329 CXX test/cpp_headers/nbd.o 00:04:32.329 CXX test/cpp_headers/net.o 00:04:32.329 CXX test/cpp_headers/notify.o 00:04:32.587 LINK nvmf 00:04:32.587 CXX test/cpp_headers/nvme.o 00:04:32.587 CXX test/cpp_headers/nvme_intel.o 00:04:32.587 CXX test/cpp_headers/nvme_ocssd.o 00:04:32.587 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:32.587 CXX test/cpp_headers/nvme_spec.o 00:04:32.587 CXX test/cpp_headers/nvme_zns.o 00:04:32.587 CXX test/cpp_headers/nvmf_cmd.o 00:04:32.587 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:32.846 CXX test/cpp_headers/nvmf.o 00:04:32.846 CXX test/cpp_headers/nvmf_spec.o 00:04:32.846 CXX test/cpp_headers/nvmf_transport.o 00:04:32.846 LINK spdk_bdev 00:04:32.846 LINK spdk_nvme 00:04:32.846 CXX test/cpp_headers/opal.o 00:04:32.846 CXX test/cpp_headers/opal_spec.o 00:04:32.846 CXX test/cpp_headers/pci_ids.o 00:04:32.846 CXX test/cpp_headers/pipe.o 00:04:32.846 CXX test/cpp_headers/queue.o 00:04:32.846 CXX test/cpp_headers/reduce.o 00:04:32.846 CXX test/cpp_headers/rpc.o 00:04:32.846 CXX test/cpp_headers/scheduler.o 00:04:32.846 CXX test/cpp_headers/scsi.o 00:04:33.106 CXX test/cpp_headers/scsi_spec.o 00:04:33.106 CXX test/cpp_headers/stdinc.o 00:04:33.106 CXX test/cpp_headers/sock.o 00:04:33.106 CXX test/cpp_headers/string.o 00:04:33.106 CXX test/cpp_headers/thread.o 00:04:33.106 CXX test/cpp_headers/trace.o 00:04:33.106 CXX test/cpp_headers/trace_parser.o 00:04:33.106 CXX test/cpp_headers/tree.o 00:04:33.106 LINK cuse 00:04:33.106 CXX test/cpp_headers/ublk.o 00:04:33.106 CXX test/cpp_headers/util.o 00:04:33.106 CXX test/cpp_headers/uuid.o 00:04:33.106 CXX test/cpp_headers/version.o 00:04:33.106 CXX test/cpp_headers/vfio_user_pci.o 00:04:33.365 CXX test/cpp_headers/vfio_user_spec.o 00:04:33.365 CXX test/cpp_headers/vhost.o 00:04:33.365 CXX test/cpp_headers/vmd.o 00:04:33.365 CXX test/cpp_headers/xor.o 00:04:33.365 CXX test/cpp_headers/zipf.o 00:04:35.273 LINK esnap 00:04:35.533 00:04:35.533 real 1m22.451s 00:04:35.533 user 
7m8.130s 00:04:35.533 sys 1m53.194s 00:04:35.533 11:48:25 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:35.533 11:48:25 make -- common/autotest_common.sh@10 -- $ set +x 00:04:35.533 ************************************ 00:04:35.533 END TEST make 00:04:35.533 ************************************ 00:04:35.533 11:48:25 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:35.533 11:48:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:35.533 11:48:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:35.533 11:48:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.533 11:48:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:35.533 11:48:25 -- pm/common@44 -- $ pid=5287 00:04:35.533 11:48:25 -- pm/common@50 -- $ kill -TERM 5287 00:04:35.533 11:48:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.533 11:48:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:35.533 11:48:25 -- pm/common@44 -- $ pid=5289 00:04:35.533 11:48:25 -- pm/common@50 -- $ kill -TERM 5289 00:04:35.533 11:48:25 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:35.533 11:48:25 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:35.533 11:48:25 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:35.533 11:48:25 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:35.533 11:48:25 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:35.793 11:48:25 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:35.793 11:48:25 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.793 11:48:25 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.793 11:48:25 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.793 11:48:25 -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.793 11:48:25 -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.793 11:48:25 -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.793 11:48:25 -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.793 11:48:25 -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.793 11:48:25 -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.793 11:48:25 -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.793 11:48:25 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.793 11:48:25 -- scripts/common.sh@344 -- # case "$op" in 00:04:35.793 11:48:25 -- scripts/common.sh@345 -- # : 1 00:04:35.793 11:48:25 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.793 11:48:25 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.793 11:48:25 -- scripts/common.sh@365 -- # decimal 1 00:04:35.793 11:48:25 -- scripts/common.sh@353 -- # local d=1 00:04:35.793 11:48:25 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.793 11:48:25 -- scripts/common.sh@355 -- # echo 1 00:04:35.793 11:48:25 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.793 11:48:25 -- scripts/common.sh@366 -- # decimal 2 00:04:35.793 11:48:25 -- scripts/common.sh@353 -- # local d=2 00:04:35.793 11:48:25 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.793 11:48:25 -- scripts/common.sh@355 -- # echo 2 00:04:35.793 11:48:25 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.793 11:48:25 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.793 11:48:25 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.793 11:48:25 -- scripts/common.sh@368 -- # return 0 00:04:35.793 11:48:25 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.793 11:48:25 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:35.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.793 --rc genhtml_branch_coverage=1 00:04:35.793 --rc genhtml_function_coverage=1 00:04:35.793 --rc genhtml_legend=1 00:04:35.793 --rc geninfo_all_blocks=1 00:04:35.793 --rc geninfo_unexecuted_blocks=1 00:04:35.793 00:04:35.793 ' 00:04:35.793 11:48:25 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:35.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.793 --rc genhtml_branch_coverage=1 00:04:35.793 --rc genhtml_function_coverage=1 00:04:35.793 --rc genhtml_legend=1 00:04:35.793 --rc geninfo_all_blocks=1 00:04:35.793 --rc geninfo_unexecuted_blocks=1 00:04:35.793 00:04:35.793 ' 00:04:35.793 11:48:25 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:35.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.793 --rc genhtml_branch_coverage=1 00:04:35.793 --rc genhtml_function_coverage=1 00:04:35.793 --rc genhtml_legend=1 00:04:35.793 --rc geninfo_all_blocks=1 00:04:35.794 --rc geninfo_unexecuted_blocks=1 00:04:35.794 00:04:35.794 ' 00:04:35.794 11:48:25 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:35.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.794 --rc genhtml_branch_coverage=1 00:04:35.794 --rc genhtml_function_coverage=1 00:04:35.794 --rc genhtml_legend=1 00:04:35.794 --rc geninfo_all_blocks=1 00:04:35.794 --rc geninfo_unexecuted_blocks=1 00:04:35.794 00:04:35.794 ' 00:04:35.794 11:48:25 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:35.794 11:48:25 -- nvmf/common.sh@7 -- # uname -s 00:04:35.794 11:48:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:35.794 11:48:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:35.794 11:48:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:35.794 11:48:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:35.794 11:48:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:35.794 11:48:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:35.794 11:48:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:35.794 11:48:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:35.794 11:48:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:35.794 11:48:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:35.794 11:48:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:76e2bd17-ea88-44b7-9470-3bc748526b9d 00:04:35.794 
11:48:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=76e2bd17-ea88-44b7-9470-3bc748526b9d 00:04:35.794 11:48:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:35.794 11:48:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:35.794 11:48:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:35.794 11:48:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:35.794 11:48:25 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:35.794 11:48:25 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:35.794 11:48:25 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:35.794 11:48:25 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:35.794 11:48:25 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:35.794 11:48:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.794 11:48:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.794 11:48:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.794 11:48:25 -- paths/export.sh@5 -- # export PATH 00:04:35.794 11:48:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.794 11:48:25 -- nvmf/common.sh@51 -- # : 0 00:04:35.794 11:48:25 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:35.794 11:48:25 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:35.794 11:48:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:35.794 11:48:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:35.794 11:48:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:35.794 11:48:25 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:35.794 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:35.794 11:48:25 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:35.794 11:48:25 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:35.794 11:48:25 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:35.794 11:48:25 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:35.794 11:48:25 -- spdk/autotest.sh@32 -- # uname -s 00:04:35.794 11:48:25 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:35.794 11:48:25 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:35.794 11:48:25 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:35.794 11:48:25 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:35.794 11:48:25 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:35.794 11:48:25 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:35.794 11:48:25 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:35.794 11:48:25 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:35.794 11:48:25 -- spdk/autotest.sh@48 -- # udevadm_pid=54746 00:04:35.794 11:48:25 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:35.794 11:48:25 -- pm/common@17 -- # local monitor 00:04:35.794 11:48:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.794 11:48:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.794 11:48:25 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:35.794 11:48:25 -- pm/common@21 -- # date +%s 00:04:35.794 11:48:25 -- pm/common@25 -- # sleep 1 00:04:35.794 11:48:25 -- pm/common@21 -- # date +%s 00:04:35.794 11:48:25 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732708105 00:04:35.794 11:48:25 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732708105 00:04:35.794 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732708105_collect-vmstat.pm.log 00:04:35.794 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732708105_collect-cpu-load.pm.log 00:04:37.176 11:48:26 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:37.176 11:48:26 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:37.176 11:48:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.176 11:48:26 -- common/autotest_common.sh@10 -- # set +x 00:04:37.176 11:48:26 -- spdk/autotest.sh@59 -- # create_test_list 00:04:37.176 11:48:26 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:37.176 11:48:26 -- common/autotest_common.sh@10 -- # set +x 00:04:37.176 11:48:26 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:37.176 11:48:26 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:37.176 11:48:26 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:37.176 11:48:26 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:37.176 11:48:26 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:37.176 11:48:26 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:37.176 11:48:26 -- common/autotest_common.sh@1457 -- # uname 00:04:37.176 11:48:26 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:37.176 11:48:26 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:37.176 11:48:26 -- common/autotest_common.sh@1477 -- # uname 00:04:37.176 11:48:26 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:37.176 11:48:26 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:37.176 11:48:26 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:37.176 lcov: LCOV version 1.15 00:04:37.176 11:48:26 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:52.090 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:52.090 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:06.982 11:48:56 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:06.982 11:48:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:06.982 11:48:56 -- common/autotest_common.sh@10 -- # set +x 00:05:06.982 11:48:56 -- spdk/autotest.sh@78 -- # rm -f 00:05:06.982 11:48:56 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:07.605 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.173 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:08.173 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:08.173 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:08.432 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:08.433 11:48:58 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:08.433 11:48:58 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:08.433 11:48:58 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:08.433 11:48:58 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:08.433 11:48:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:08.433 11:48:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:08.433 11:48:58 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:08.433 11:48:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:08.433 11:48:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:08.433 11:48:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:08.433 11:48:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:05:08.433 11:48:58 -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:05:08.433 11:48:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:05:08.433 11:48:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:08.433 11:48:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:08.433 11:48:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:08.433 11:48:58 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:08.433 11:48:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:08.433 11:48:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:08.433 11:48:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:08.433 11:48:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:05:08.433 11:48:58 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:08.433 11:48:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:08.433 11:48:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:08.433 11:48:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:08.433 11:48:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:05:08.433 11:48:58 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:08.433 11:48:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:08.433 
11:48:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:08.433 11:48:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:08.433 11:48:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n2 00:05:08.433 11:48:58 -- common/autotest_common.sh@1650 -- # local device=nvme3n2 00:05:08.433 11:48:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:05:08.433 11:48:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:08.433 11:48:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:08.433 11:48:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n3 00:05:08.433 11:48:58 -- common/autotest_common.sh@1650 -- # local device=nvme3n3 00:05:08.433 11:48:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:05:08.433 11:48:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:08.433 11:48:58 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:08.433 11:48:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:08.433 11:48:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:08.433 11:48:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:08.433 11:48:58 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:08.433 11:48:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:08.433 No valid GPT data, bailing 00:05:08.433 11:48:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:08.433 11:48:58 -- scripts/common.sh@394 -- # pt= 00:05:08.433 11:48:58 -- scripts/common.sh@395 -- # return 1 00:05:08.433 11:48:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:08.433 1+0 records in 00:05:08.433 1+0 records out 00:05:08.433 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173043 s, 60.6 MB/s 00:05:08.433 11:48:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:08.433 11:48:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:08.433 11:48:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:08.433 11:48:58 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:08.433 11:48:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:08.433 No valid GPT data, bailing 00:05:08.433 11:48:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:08.433 11:48:58 -- scripts/common.sh@394 -- # pt= 00:05:08.433 11:48:58 -- scripts/common.sh@395 -- # return 1 00:05:08.433 11:48:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:08.433 1+0 records in 00:05:08.433 1+0 records out 00:05:08.433 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0062862 s, 167 MB/s 00:05:08.433 11:48:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:08.433 11:48:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:08.433 11:48:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:08.433 11:48:58 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:08.433 11:48:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:08.692 No valid GPT data, bailing 00:05:08.692 11:48:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:08.692 11:48:58 -- scripts/common.sh@394 -- # pt= 00:05:08.692 11:48:58 -- scripts/common.sh@395 -- # return 1 00:05:08.692 11:48:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:08.692 1+0 
records in 00:05:08.692 1+0 records out 00:05:08.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00651623 s, 161 MB/s 00:05:08.692 11:48:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:08.692 11:48:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:08.692 11:48:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:08.692 11:48:58 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:08.692 11:48:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:08.692 No valid GPT data, bailing 00:05:08.692 11:48:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:08.692 11:48:58 -- scripts/common.sh@394 -- # pt= 00:05:08.692 11:48:58 -- scripts/common.sh@395 -- # return 1 00:05:08.692 11:48:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:08.692 1+0 records in 00:05:08.692 1+0 records out 00:05:08.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00603741 s, 174 MB/s 00:05:08.692 11:48:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:08.692 11:48:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:08.692 11:48:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 00:05:08.692 11:48:58 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 00:05:08.692 11:48:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:05:08.692 No valid GPT data, bailing 00:05:08.692 11:48:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:05:08.692 11:48:58 -- scripts/common.sh@394 -- # pt= 00:05:08.692 11:48:58 -- scripts/common.sh@395 -- # return 1 00:05:08.692 11:48:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:05:08.692 1+0 records in 00:05:08.692 1+0 records out 00:05:08.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00596892 s, 176 MB/s 00:05:08.692 11:48:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:08.692 11:48:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:08.692 11:48:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 00:05:08.692 11:48:58 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 00:05:08.692 11:48:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:05:08.692 No valid GPT data, bailing 00:05:08.692 11:48:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:05:08.951 11:48:58 -- scripts/common.sh@394 -- # pt= 00:05:08.951 11:48:58 -- scripts/common.sh@395 -- # return 1 00:05:08.951 11:48:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:05:08.951 1+0 records in 00:05:08.951 1+0 records out 00:05:08.951 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00624518 s, 168 MB/s 00:05:08.951 11:48:58 -- spdk/autotest.sh@105 -- # sync 00:05:08.951 11:48:58 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:08.951 11:48:58 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:08.951 11:48:58 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:12.249 11:49:01 -- spdk/autotest.sh@111 -- # uname -s 00:05:12.249 11:49:01 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:12.249 11:49:01 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:12.249 11:49:01 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:12.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.388 
Hugepages 00:05:13.388 node hugesize free / total 00:05:13.388 node0 1048576kB 0 / 0 00:05:13.388 node0 2048kB 0 / 0 00:05:13.388 00:05:13.388 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:13.388 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:13.388 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:13.648 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:13.648 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:05:13.908 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:13.908 11:49:03 -- spdk/autotest.sh@117 -- # uname -s 00:05:13.908 11:49:03 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:13.908 11:49:03 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:13.908 11:49:03 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:14.478 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.417 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.417 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.417 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.417 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.417 11:49:05 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:16.797 11:49:06 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:16.797 11:49:06 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:16.797 11:49:06 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:16.797 11:49:06 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:16.797 11:49:06 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:16.797 11:49:06 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:16.797 11:49:06 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:16.797 11:49:06 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:16.797 11:49:06 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:16.797 11:49:06 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:16.797 11:49:06 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:16.797 11:49:06 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:17.057 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.317 Waiting for block devices as requested 00:05:17.581 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:17.581 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:17.840 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:17.840 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:23.122 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:23.122 11:49:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:23.122 11:49:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:23.122 11:49:12 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:23.122 11:49:12 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:23.122 11:49:12 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:23.122 11:49:12 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:23.122 11:49:12 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:23.122 11:49:12 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:23.122 11:49:12 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:23.122 11:49:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:23.122 11:49:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:23.122 11:49:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:23.122 11:49:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:23.122 11:49:12 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:23.122 11:49:12 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:23.122 11:49:12 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:23.122 11:49:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:23.122 11:49:12 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:23.122 11:49:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:23.122 11:49:12 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:23.122 11:49:12 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:23.122 11:49:12 -- common/autotest_common.sh@1543 -- # continue 00:05:23.122 11:49:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:23.122 11:49:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:23.122 11:49:12 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:23.122 11:49:12 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:23.122 11:49:12 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:23.122 11:49:12 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:23.122 11:49:12 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:23.122 11:49:12 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:23.122 11:49:12 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:23.122 11:49:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:23.122 11:49:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:23.122 11:49:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:23.122 11:49:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:23.122 11:49:12 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:23.122 11:49:12 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:23.122 11:49:12 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:23.122 11:49:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:23.122 11:49:12 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:23.122 11:49:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:23.122 11:49:12 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:23.122 11:49:12 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:23.122 11:49:12 -- common/autotest_common.sh@1543 -- # continue 00:05:23.122 11:49:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:23.122 11:49:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:23.122 11:49:12 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 
00:05:23.122 11:49:12 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:23.122 11:49:12 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:23.122 11:49:12 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:23.122 11:49:12 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:23.122 11:49:12 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:23.122 11:49:12 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:23.122 11:49:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:23.122 11:49:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:23.122 11:49:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:23.122 11:49:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:23.122 11:49:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:23.122 11:49:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:23.122 11:49:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:23.122 11:49:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:23.122 11:49:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:23.122 11:49:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:23.122 11:49:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:23.122 11:49:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:23.122 11:49:13 -- common/autotest_common.sh@1543 -- # continue 00:05:23.122 11:49:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:23.122 11:49:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:23.122 11:49:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:23.122 11:49:13 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:23.122 11:49:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:23.122 11:49:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:23.122 11:49:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:23.122 11:49:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:23.122 11:49:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:23.122 11:49:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:23.122 11:49:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:23.122 11:49:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:23.122 11:49:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:23.122 11:49:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:23.122 11:49:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:23.122 11:49:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:23.122 11:49:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:23.122 11:49:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:23.123 11:49:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:23.123 11:49:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:23.123 11:49:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
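Each iteration of the waiting-for-block-devices loop above follows the same idiom: resolve a PCI BDF to its /dev/nvmeX character device through sysfs, then read two fields out of nvme id-ctrl to decide whether the controller needs any namespace cleanup. A condensed, standalone sketch of that idiom — illustrative, not the autotest helper itself; nvme-cli needs root, and the BDF is the one this VM happens to expose:

# Map a PCI BDF to its NVMe character device: /sys/class/nvme/nvmeX is a
# symlink into the owning PCI device's sysfs tree, so grepping the resolved
# paths for the BDF picks out the right controller.
nvme_ctrlr_from_bdf() {
    local bdf=$1 path
    path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return 1
    basename "$path"    # e.g. nvme1
}

ctrlr="/dev/$(nvme_ctrlr_from_bdf 0000:00:10.0)" || exit 1
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # e.g. 0x12a
unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)  # e.g. 0
# OACS bit 3 advertises namespace management; unvmcap of 0 means all capacity
# is already allocated to namespaces, so there is nothing to revert.
if (( (oacs & 0x8) != 0 && unvmcap == 0 )); then
    echo "$ctrlr: namespace management supported, layout already clean"
fi

That is why every iteration in the trace ends in continue: oacs comes back as 0x12a (bit 3 set, hence oacs_ns_manage=8) and unvmcap is 0 on all four emulated controllers.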
00:05:23.123 11:49:13 -- common/autotest_common.sh@1543 -- # continue 00:05:23.123 11:49:13 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:23.123 11:49:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.123 11:49:13 -- common/autotest_common.sh@10 -- # set +x 00:05:23.123 11:49:13 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:23.123 11:49:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.123 11:49:13 -- common/autotest_common.sh@10 -- # set +x 00:05:23.123 11:49:13 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.060 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.629 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.629 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.629 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.889 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.889 11:49:14 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:24.889 11:49:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.889 11:49:14 -- common/autotest_common.sh@10 -- # set +x 00:05:24.889 11:49:14 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:24.889 11:49:14 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:24.889 11:49:14 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:24.889 11:49:14 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:24.889 11:49:14 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:24.889 11:49:14 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:24.889 11:49:14 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:24.889 11:49:14 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:24.889 11:49:14 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:24.889 11:49:14 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:24.889 11:49:14 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:24.889 11:49:14 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:24.889 11:49:14 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:25.149 11:49:14 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:25.149 11:49:14 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:25.149 11:49:14 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:25.149 11:49:14 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:25.149 11:49:14 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:25.149 11:49:14 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:25.149 11:49:14 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:25.149 11:49:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:25.149 11:49:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:25.149 11:49:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:25.149 11:49:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:25.149 11:49:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:25.149 11:49:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:25.149 11:49:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
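The opal_revert_cleanup trace running through here builds its BDF list the same way get_nvme_bdfs did a moment earlier — gen_nvme.sh emits a JSON config and jq pulls each controller's traddr — and then filters on the PCI device ID from sysfs. A minimal sketch of that enumerate-and-filter step, under the same repo layout this run uses:

rootdir=/home/vagrant/spdk_repo/spdk
# gen_nvme.sh emits a JSON config describing each local controller;
# params.traddr carries the PCI BDF (the jq path below is the one traced above).
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
opal_bdfs=()
for bdf in "${bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")  # e.g. 0x0010 for QEMU NVMe
    [[ $device == 0x0a54 ]] && opal_bdfs+=("$bdf")    # keep only 0x0a54 parts
done
echo "controllers needing opal revert: ${#opal_bdfs[@]}"

On this QEMU guest every controller reports device ID 0x0010, so the 0x0a54 filter matches nothing and the cleanup returns without touching a drive.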
00:05:25.149 11:49:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:25.149 11:49:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:25.149 11:49:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:25.149 11:49:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:25.149 11:49:15 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:25.149 11:49:15 -- common/autotest_common.sh@1572 -- # return 0 00:05:25.149 11:49:15 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:25.149 11:49:15 -- common/autotest_common.sh@1580 -- # return 0 00:05:25.149 11:49:15 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:25.149 11:49:15 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:25.149 11:49:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:25.149 11:49:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:25.149 11:49:15 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:25.149 11:49:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:25.149 11:49:15 -- common/autotest_common.sh@10 -- # set +x 00:05:25.149 11:49:15 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:25.149 11:49:15 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:25.149 11:49:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.149 11:49:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.149 11:49:15 -- common/autotest_common.sh@10 -- # set +x 00:05:25.149 ************************************ 00:05:25.149 START TEST env 00:05:25.149 ************************************ 00:05:25.149 11:49:15 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:25.149 * Looking for test storage... 00:05:25.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:25.149 11:49:15 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:25.149 11:49:15 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:25.149 11:49:15 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:25.409 11:49:15 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:25.409 11:49:15 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.409 11:49:15 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.409 11:49:15 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.409 11:49:15 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.409 11:49:15 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.409 11:49:15 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.409 11:49:15 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.409 11:49:15 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.409 11:49:15 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.409 11:49:15 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.409 11:49:15 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.409 11:49:15 env -- scripts/common.sh@344 -- # case "$op" in 00:05:25.409 11:49:15 env -- scripts/common.sh@345 -- # : 1 00:05:25.409 11:49:15 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.409 11:49:15 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.409 11:49:15 env -- scripts/common.sh@365 -- # decimal 1 00:05:25.409 11:49:15 env -- scripts/common.sh@353 -- # local d=1 00:05:25.409 11:49:15 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.409 11:49:15 env -- scripts/common.sh@355 -- # echo 1 00:05:25.409 11:49:15 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.409 11:49:15 env -- scripts/common.sh@366 -- # decimal 2 00:05:25.409 11:49:15 env -- scripts/common.sh@353 -- # local d=2 00:05:25.409 11:49:15 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.409 11:49:15 env -- scripts/common.sh@355 -- # echo 2 00:05:25.409 11:49:15 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.409 11:49:15 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.409 11:49:15 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.409 11:49:15 env -- scripts/common.sh@368 -- # return 0 00:05:25.409 11:49:15 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.409 11:49:15 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:25.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.409 --rc genhtml_branch_coverage=1 00:05:25.409 --rc genhtml_function_coverage=1 00:05:25.409 --rc genhtml_legend=1 00:05:25.409 --rc geninfo_all_blocks=1 00:05:25.409 --rc geninfo_unexecuted_blocks=1 00:05:25.409 00:05:25.409 ' 00:05:25.409 11:49:15 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:25.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.409 --rc genhtml_branch_coverage=1 00:05:25.409 --rc genhtml_function_coverage=1 00:05:25.409 --rc genhtml_legend=1 00:05:25.409 --rc geninfo_all_blocks=1 00:05:25.409 --rc geninfo_unexecuted_blocks=1 00:05:25.409 00:05:25.409 ' 00:05:25.409 11:49:15 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:25.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.409 --rc genhtml_branch_coverage=1 00:05:25.409 --rc genhtml_function_coverage=1 00:05:25.409 --rc genhtml_legend=1 00:05:25.409 --rc geninfo_all_blocks=1 00:05:25.409 --rc geninfo_unexecuted_blocks=1 00:05:25.409 00:05:25.409 ' 00:05:25.409 11:49:15 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:25.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.409 --rc genhtml_branch_coverage=1 00:05:25.409 --rc genhtml_function_coverage=1 00:05:25.409 --rc genhtml_legend=1 00:05:25.409 --rc geninfo_all_blocks=1 00:05:25.409 --rc geninfo_unexecuted_blocks=1 00:05:25.409 00:05:25.409 ' 00:05:25.409 11:49:15 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:25.409 11:49:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.409 11:49:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.409 11:49:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.409 ************************************ 00:05:25.409 START TEST env_memory 00:05:25.409 ************************************ 00:05:25.409 11:49:15 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:25.409 00:05:25.409 00:05:25.409 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.409 http://cunit.sourceforge.net/ 00:05:25.409 00:05:25.409 00:05:25.409 Suite: memory 00:05:25.409 Test: alloc and free memory map ...[2024-11-27 11:49:15.373761] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:25.409 passed 00:05:25.409 Test: mem map translation ...[2024-11-27 11:49:15.417634] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:25.409 [2024-11-27 11:49:15.417693] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:25.409 [2024-11-27 11:49:15.417760] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:25.409 [2024-11-27 11:49:15.417783] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:25.669 passed 00:05:25.669 Test: mem map registration ...[2024-11-27 11:49:15.484543] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:25.669 [2024-11-27 11:49:15.484586] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:25.669 passed 00:05:25.669 Test: mem map adjacent registrations ...passed 00:05:25.669 00:05:25.669 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.669 suites 1 1 n/a 0 0 00:05:25.669 tests 4 4 4 0 0 00:05:25.669 asserts 152 152 152 0 n/a 00:05:25.669 00:05:25.669 Elapsed time = 0.239 seconds 00:05:25.669 00:05:25.669 real 0m0.297s 00:05:25.669 user 0m0.256s 00:05:25.669 sys 0m0.030s 00:05:25.669 11:49:15 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.669 11:49:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:25.669 ************************************ 00:05:25.669 END TEST env_memory 00:05:25.669 ************************************ 00:05:25.669 11:49:15 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:25.669 11:49:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.669 11:49:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.670 11:49:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.670 ************************************ 00:05:25.670 START TEST env_vtophys 00:05:25.670 ************************************ 00:05:25.670 11:49:15 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:25.929 EAL: lib.eal log level changed from notice to debug 00:05:25.929 EAL: Detected lcore 0 as core 0 on socket 0 00:05:25.929 EAL: Detected lcore 1 as core 0 on socket 0 00:05:25.929 EAL: Detected lcore 2 as core 0 on socket 0 00:05:25.929 EAL: Detected lcore 3 as core 0 on socket 0 00:05:25.929 EAL: Detected lcore 4 as core 0 on socket 0 00:05:25.929 EAL: Detected lcore 5 as core 0 on socket 0 00:05:25.929 EAL: Detected lcore 6 as core 0 on socket 0 00:05:25.929 EAL: Detected lcore 7 as core 0 on socket 0 00:05:25.929 EAL: Detected lcore 8 as core 0 on socket 0 00:05:25.929 EAL: Detected lcore 9 as core 0 on socket 0 00:05:25.929 EAL: Maximum logical cores by configuration: 128 00:05:25.929 EAL: Detected CPU lcores: 10 00:05:25.929 EAL: Detected NUMA nodes: 1 00:05:25.929 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:25.929 EAL: Detected shared linkage of DPDK 00:05:25.929 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:25.929 EAL: Selected IOVA mode 'PA' 00:05:25.929 EAL: Probing VFIO support... 00:05:25.929 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:25.929 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:25.929 EAL: Ask a virtual area of 0x2e000 bytes 00:05:25.929 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:25.929 EAL: Setting up physically contiguous memory... 00:05:25.929 EAL: Setting maximum number of open files to 524288 00:05:25.929 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:25.929 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:25.929 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.929 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:25.929 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.929 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.929 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:25.929 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:25.929 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.929 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:25.929 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.929 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.929 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:25.929 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:25.929 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.929 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:25.929 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.929 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.929 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:25.929 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:25.929 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.929 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:25.929 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.929 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.929 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:25.929 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:25.929 EAL: Hugepages will be freed exactly as allocated. 00:05:25.929 EAL: No shared files mode enabled, IPC is disabled 00:05:25.929 EAL: No shared files mode enabled, IPC is disabled 00:05:25.929 EAL: TSC frequency is ~2490000 KHz 00:05:25.929 EAL: Main lcore 0 is ready (tid=7f83712c8a40;cpuset=[0]) 00:05:25.929 EAL: Trying to obtain current memory policy. 00:05:25.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.929 EAL: Restoring previous memory policy: 0 00:05:25.929 EAL: request: mp_malloc_sync 00:05:25.929 EAL: No shared files mode enabled, IPC is disabled 00:05:25.929 EAL: Heap on socket 0 was expanded by 2MB 00:05:25.929 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:25.929 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:25.929 EAL: Mem event callback 'spdk:(nil)' registered 00:05:25.930 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:25.930 00:05:25.930 00:05:25.930 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.930 http://cunit.sourceforge.net/ 00:05:25.930 00:05:25.930 00:05:25.930 Suite: components_suite 00:05:26.498 Test: vtophys_malloc_test ...passed 00:05:26.498 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:26.498 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.498 EAL: Restoring previous memory policy: 4 00:05:26.498 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.498 EAL: request: mp_malloc_sync 00:05:26.498 EAL: No shared files mode enabled, IPC is disabled 00:05:26.498 EAL: Heap on socket 0 was expanded by 4MB 00:05:26.498 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.498 EAL: request: mp_malloc_sync 00:05:26.498 EAL: No shared files mode enabled, IPC is disabled 00:05:26.498 EAL: Heap on socket 0 was shrunk by 4MB 00:05:26.498 EAL: Trying to obtain current memory policy. 00:05:26.498 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.498 EAL: Restoring previous memory policy: 4 00:05:26.498 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.498 EAL: request: mp_malloc_sync 00:05:26.498 EAL: No shared files mode enabled, IPC is disabled 00:05:26.498 EAL: Heap on socket 0 was expanded by 6MB 00:05:26.498 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.498 EAL: request: mp_malloc_sync 00:05:26.498 EAL: No shared files mode enabled, IPC is disabled 00:05:26.498 EAL: Heap on socket 0 was shrunk by 6MB 00:05:26.498 EAL: Trying to obtain current memory policy. 00:05:26.498 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.498 EAL: Restoring previous memory policy: 4 00:05:26.498 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.498 EAL: request: mp_malloc_sync 00:05:26.498 EAL: No shared files mode enabled, IPC is disabled 00:05:26.498 EAL: Heap on socket 0 was expanded by 10MB 00:05:26.498 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.498 EAL: request: mp_malloc_sync 00:05:26.498 EAL: No shared files mode enabled, IPC is disabled 00:05:26.498 EAL: Heap on socket 0 was shrunk by 10MB 00:05:26.498 EAL: Trying to obtain current memory policy. 00:05:26.498 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.498 EAL: Restoring previous memory policy: 4 00:05:26.498 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.498 EAL: request: mp_malloc_sync 00:05:26.498 EAL: No shared files mode enabled, IPC is disabled 00:05:26.498 EAL: Heap on socket 0 was expanded by 18MB 00:05:26.498 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.498 EAL: request: mp_malloc_sync 00:05:26.498 EAL: No shared files mode enabled, IPC is disabled 00:05:26.498 EAL: Heap on socket 0 was shrunk by 18MB 00:05:26.498 EAL: Trying to obtain current memory policy. 00:05:26.498 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.498 EAL: Restoring previous memory policy: 4 00:05:26.498 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.498 EAL: request: mp_malloc_sync 00:05:26.498 EAL: No shared files mode enabled, IPC is disabled 00:05:26.498 EAL: Heap on socket 0 was expanded by 34MB 00:05:26.498 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.498 EAL: request: mp_malloc_sync 00:05:26.498 EAL: No shared files mode enabled, IPC is disabled 00:05:26.498 EAL: Heap on socket 0 was shrunk by 34MB 00:05:26.758 EAL: Trying to obtain current memory policy. 
00:05:26.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.758 EAL: Restoring previous memory policy: 4 00:05:26.758 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.758 EAL: request: mp_malloc_sync 00:05:26.758 EAL: No shared files mode enabled, IPC is disabled 00:05:26.758 EAL: Heap on socket 0 was expanded by 66MB 00:05:26.758 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.758 EAL: request: mp_malloc_sync 00:05:26.758 EAL: No shared files mode enabled, IPC is disabled 00:05:26.758 EAL: Heap on socket 0 was shrunk by 66MB 00:05:26.758 EAL: Trying to obtain current memory policy. 00:05:26.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.017 EAL: Restoring previous memory policy: 4 00:05:27.017 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.017 EAL: request: mp_malloc_sync 00:05:27.017 EAL: No shared files mode enabled, IPC is disabled 00:05:27.017 EAL: Heap on socket 0 was expanded by 130MB 00:05:27.017 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.277 EAL: request: mp_malloc_sync 00:05:27.277 EAL: No shared files mode enabled, IPC is disabled 00:05:27.277 EAL: Heap on socket 0 was shrunk by 130MB 00:05:27.277 EAL: Trying to obtain current memory policy. 00:05:27.277 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.277 EAL: Restoring previous memory policy: 4 00:05:27.277 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.277 EAL: request: mp_malloc_sync 00:05:27.277 EAL: No shared files mode enabled, IPC is disabled 00:05:27.277 EAL: Heap on socket 0 was expanded by 258MB 00:05:27.846 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.846 EAL: request: mp_malloc_sync 00:05:27.846 EAL: No shared files mode enabled, IPC is disabled 00:05:27.846 EAL: Heap on socket 0 was shrunk by 258MB 00:05:28.414 EAL: Trying to obtain current memory policy. 00:05:28.414 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.414 EAL: Restoring previous memory policy: 4 00:05:28.414 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.414 EAL: request: mp_malloc_sync 00:05:28.414 EAL: No shared files mode enabled, IPC is disabled 00:05:28.414 EAL: Heap on socket 0 was expanded by 514MB 00:05:29.353 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.353 EAL: request: mp_malloc_sync 00:05:29.353 EAL: No shared files mode enabled, IPC is disabled 00:05:29.353 EAL: Heap on socket 0 was shrunk by 514MB 00:05:30.291 EAL: Trying to obtain current memory policy. 
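Each expand/shrink pair above is one round of vtophys_spdk_malloc_test: the test allocates a progressively larger pinned buffer, DPDK expands the heap from hugepages, and freeing the buffer fires the 'spdk:(nil)' mem event callback again on the way down. A minimal sketch for rerunning this unit test by hand, assuming the build tree shown in the log; the hugepage count is an assumption, not something the log specifies:

  # Reserve 2 MiB hugepages, then invoke the same test binary run_test used above.
  echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
  sudo /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys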
00:05:30.291 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.551 EAL: Restoring previous memory policy: 4 00:05:30.551 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.551 EAL: request: mp_malloc_sync 00:05:30.551 EAL: No shared files mode enabled, IPC is disabled 00:05:30.551 EAL: Heap on socket 0 was expanded by 1026MB 00:05:32.463 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.463 EAL: request: mp_malloc_sync 00:05:32.463 EAL: No shared files mode enabled, IPC is disabled 00:05:32.463 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:34.372 passed 00:05:34.372 00:05:34.372 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.372 suites 1 1 n/a 0 0 00:05:34.372 tests 2 2 2 0 0 00:05:34.372 asserts 5740 5740 5740 0 n/a 00:05:34.372 00:05:34.372 Elapsed time = 7.926 seconds 00:05:34.372 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.372 EAL: request: mp_malloc_sync 00:05:34.372 EAL: No shared files mode enabled, IPC is disabled 00:05:34.372 EAL: Heap on socket 0 was shrunk by 2MB 00:05:34.372 EAL: No shared files mode enabled, IPC is disabled 00:05:34.372 EAL: No shared files mode enabled, IPC is disabled 00:05:34.372 EAL: No shared files mode enabled, IPC is disabled 00:05:34.372 00:05:34.372 real 0m8.274s 00:05:34.372 user 0m7.256s 00:05:34.372 sys 0m0.857s 00:05:34.372 11:49:23 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.372 11:49:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:34.372 ************************************ 00:05:34.372 END TEST env_vtophys 00:05:34.372 ************************************ 00:05:34.372 11:49:24 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:34.372 11:49:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.372 11:49:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.372 11:49:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.372 ************************************ 00:05:34.372 START TEST env_pci 00:05:34.372 ************************************ 00:05:34.372 11:49:24 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:34.372 00:05:34.372 00:05:34.372 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.372 http://cunit.sourceforge.net/ 00:05:34.372 00:05:34.372 00:05:34.372 Suite: pci 00:05:34.372 Test: pci_hook ...[2024-11-27 11:49:24.065389] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57596 has claimed it 00:05:34.372 passed 00:05:34.372 00:05:34.372 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.372 suites 1 1 n/a 0 0 00:05:34.372 tests 1 1 1 0 0 00:05:34.372 asserts 25 25 25 0 n/a 00:05:34.372 00:05:34.372 Elapsed time = 0.010 seconds EAL: Cannot find device (10000:00:01.0) 00:05:34.372 EAL: Failed to attach device on primary process 00:05:34.372 00:05:34.372 00:05:34.372 real 0m0.117s 00:05:34.372 user 0m0.052s 00:05:34.372 sys 0m0.064s 00:05:34.372 11:49:24 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.372 11:49:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:34.372 ************************************ 00:05:34.372 END TEST env_pci 00:05:34.372 ************************************ 00:05:34.372 11:49:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:34.372 11:49:24 env -- env/env.sh@15 -- # uname 00:05:34.372 11:49:24 env -- 
env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:34.372 11:49:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:34.372 11:49:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:34.372 11:49:24 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:34.372 11:49:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.372 11:49:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.372 ************************************ 00:05:34.372 START TEST env_dpdk_post_init 00:05:34.372 ************************************ 00:05:34.372 11:49:24 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:34.372 EAL: Detected CPU lcores: 10 00:05:34.372 EAL: Detected NUMA nodes: 1 00:05:34.372 EAL: Detected shared linkage of DPDK 00:05:34.372 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:34.372 EAL: Selected IOVA mode 'PA' 00:05:34.632 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:34.632 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:34.632 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:34.632 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:34.632 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:34.632 Starting DPDK initialization... 00:05:34.632 Starting SPDK post initialization... 00:05:34.632 SPDK NVMe probe 00:05:34.632 Attaching to 0000:00:10.0 00:05:34.632 Attaching to 0000:00:11.0 00:05:34.632 Attaching to 0000:00:12.0 00:05:34.632 Attaching to 0000:00:13.0 00:05:34.632 Attached to 0000:00:10.0 00:05:34.632 Attached to 0000:00:11.0 00:05:34.632 Attached to 0000:00:13.0 00:05:34.632 Attached to 0000:00:12.0 00:05:34.632 Cleaning up... 
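The probe lines above show spdk_nvme binding the four QEMU-emulated NVMe functions (vendor:device 1b36:0010). A hedged way to cross-check the same devices from the host side, using the BDFs printed above (lspci output format varies by distro):

  # Expect [1b36:0010] on each of the functions the test attached to.
  for bdf in 00:10.0 00:11.0 00:12.0 00:13.0; do
      lspci -nn -s "$bdf"
  done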
00:05:34.632 00:05:34.632 real 0m0.317s 00:05:34.632 user 0m0.099s 00:05:34.632 sys 0m0.119s 00:05:34.632 11:49:24 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.632 ************************************ 00:05:34.632 END TEST env_dpdk_post_init 00:05:34.632 ************************************ 00:05:34.632 11:49:24 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:34.632 11:49:24 env -- env/env.sh@26 -- # uname 00:05:34.632 11:49:24 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:34.632 11:49:24 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:34.632 11:49:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.632 11:49:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.632 11:49:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.632 ************************************ 00:05:34.632 START TEST env_mem_callbacks 00:05:34.632 ************************************ 00:05:34.632 11:49:24 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:34.632 EAL: Detected CPU lcores: 10 00:05:34.632 EAL: Detected NUMA nodes: 1 00:05:34.632 EAL: Detected shared linkage of DPDK 00:05:34.892 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:34.892 EAL: Selected IOVA mode 'PA' 00:05:34.892 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:34.892 00:05:34.892 00:05:34.892 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.892 http://cunit.sourceforge.net/ 00:05:34.892 00:05:34.892 00:05:34.892 Suite: memory 00:05:34.892 Test: test ... 00:05:34.892 register 0x200000200000 2097152 00:05:34.892 malloc 3145728 00:05:34.892 register 0x200000400000 4194304 00:05:34.892 buf 0x2000004fffc0 len 3145728 PASSED 00:05:34.892 malloc 64 00:05:34.892 buf 0x2000004ffec0 len 64 PASSED 00:05:34.892 malloc 4194304 00:05:34.892 register 0x200000800000 6291456 00:05:34.892 buf 0x2000009fffc0 len 4194304 PASSED 00:05:34.892 free 0x2000004fffc0 3145728 00:05:34.892 free 0x2000004ffec0 64 00:05:34.892 unregister 0x200000400000 4194304 PASSED 00:05:34.892 free 0x2000009fffc0 4194304 00:05:34.892 unregister 0x200000800000 6291456 PASSED 00:05:34.892 malloc 8388608 00:05:34.892 register 0x200000400000 10485760 00:05:34.892 buf 0x2000005fffc0 len 8388608 PASSED 00:05:34.892 free 0x2000005fffc0 8388608 00:05:34.892 unregister 0x200000400000 10485760 PASSED 00:05:34.892 passed 00:05:34.892 00:05:34.892 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.892 suites 1 1 n/a 0 0 00:05:34.892 tests 1 1 1 0 0 00:05:34.892 asserts 15 15 15 0 n/a 00:05:34.892 00:05:34.892 Elapsed time = 0.083 seconds 00:05:34.892 00:05:34.892 real 0m0.295s 00:05:34.892 user 0m0.119s 00:05:34.892 sys 0m0.073s 00:05:34.892 11:49:24 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.892 ************************************ 00:05:34.892 END TEST env_mem_callbacks 00:05:34.892 ************************************ 00:05:34.892 11:49:24 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:35.152 ************************************ 00:05:35.152 END TEST env 00:05:35.152 ************************************ 00:05:35.152 00:05:35.152 real 0m9.918s 00:05:35.152 user 0m8.034s 00:05:35.152 sys 0m1.504s 00:05:35.152 11:49:24 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.152 11:49:24 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:35.152 11:49:25 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:35.152 11:49:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.152 11:49:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.152 11:49:25 -- common/autotest_common.sh@10 -- # set +x 00:05:35.152 ************************************ 00:05:35.152 START TEST rpc 00:05:35.152 ************************************ 00:05:35.152 11:49:25 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:35.152 * Looking for test storage... 00:05:35.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:35.152 11:49:25 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.152 11:49:25 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.152 11:49:25 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.413 11:49:25 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.413 11:49:25 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.413 11:49:25 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.413 11:49:25 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.413 11:49:25 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.413 11:49:25 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.413 11:49:25 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.413 11:49:25 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.413 11:49:25 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.413 11:49:25 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.413 11:49:25 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.413 11:49:25 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.413 11:49:25 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:35.413 11:49:25 rpc -- scripts/common.sh@345 -- # : 1 00:05:35.413 11:49:25 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.413 11:49:25 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.413 11:49:25 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:35.413 11:49:25 rpc -- scripts/common.sh@353 -- # local d=1 00:05:35.413 11:49:25 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.413 11:49:25 rpc -- scripts/common.sh@355 -- # echo 1 00:05:35.413 11:49:25 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.413 11:49:25 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:35.413 11:49:25 rpc -- scripts/common.sh@353 -- # local d=2 00:05:35.413 11:49:25 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.413 11:49:25 rpc -- scripts/common.sh@355 -- # echo 2 00:05:35.413 11:49:25 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.413 11:49:25 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.413 11:49:25 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.413 11:49:25 rpc -- scripts/common.sh@368 -- # return 0 00:05:35.413 11:49:25 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.413 11:49:25 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.413 --rc genhtml_branch_coverage=1 00:05:35.413 --rc genhtml_function_coverage=1 00:05:35.413 --rc genhtml_legend=1 00:05:35.413 --rc geninfo_all_blocks=1 00:05:35.413 --rc geninfo_unexecuted_blocks=1 00:05:35.413 00:05:35.413 ' 00:05:35.413 11:49:25 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.413 --rc genhtml_branch_coverage=1 00:05:35.413 --rc genhtml_function_coverage=1 00:05:35.413 --rc genhtml_legend=1 00:05:35.413 --rc geninfo_all_blocks=1 00:05:35.413 --rc geninfo_unexecuted_blocks=1 00:05:35.413 00:05:35.413 ' 00:05:35.413 11:49:25 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.413 --rc genhtml_branch_coverage=1 00:05:35.413 --rc genhtml_function_coverage=1 00:05:35.413 --rc genhtml_legend=1 00:05:35.413 --rc geninfo_all_blocks=1 00:05:35.413 --rc geninfo_unexecuted_blocks=1 00:05:35.413 00:05:35.413 ' 00:05:35.413 11:49:25 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.413 --rc genhtml_branch_coverage=1 00:05:35.413 --rc genhtml_function_coverage=1 00:05:35.413 --rc genhtml_legend=1 00:05:35.413 --rc geninfo_all_blocks=1 00:05:35.413 --rc geninfo_unexecuted_blocks=1 00:05:35.413 00:05:35.413 ' 00:05:35.413 11:49:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57723 00:05:35.413 11:49:25 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:35.413 11:49:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.413 11:49:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57723 00:05:35.413 11:49:25 rpc -- common/autotest_common.sh@835 -- # '[' -z 57723 ']' 00:05:35.413 11:49:25 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.413 11:49:25 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.413 11:49:25 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
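waitforlisten above polls until the freshly launched spdk_tgt answers on /var/tmp/spdk.sock. A simplified stand-in for that loop, using the build paths from this log; rpc_get_methods serves only as a cheap liveness probe here:

  # Start the target with the bdev tracepoint group enabled, as rpc.sh does,
  # then poll the default RPC socket until it responds.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done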
00:05:35.413 11:49:25 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.413 11:49:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.413 [2024-11-27 11:49:25.404383] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:05:35.413 [2024-11-27 11:49:25.404508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57723 ] 00:05:35.673 [2024-11-27 11:49:25.587261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.673 [2024-11-27 11:49:25.700173] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:35.673 [2024-11-27 11:49:25.700239] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57723' to capture a snapshot of events at runtime. 00:05:35.673 [2024-11-27 11:49:25.700253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:35.673 [2024-11-27 11:49:25.700267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:35.673 [2024-11-27 11:49:25.700278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57723 for offline analysis/debug. 00:05:35.673 [2024-11-27 11:49:25.701642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.674 11:49:26 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.674 11:49:26 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:36.674 11:49:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:36.674 11:49:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:36.674 11:49:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:36.674 11:49:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:36.674 11:49:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.674 11:49:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.674 11:49:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.674 ************************************ 00:05:36.674 START TEST rpc_integrity 00:05:36.674 ************************************ 00:05:36.674 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:36.674 11:49:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:36.674 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.674 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.674 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.674 11:49:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:36.674 11:49:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:36.674 11:49:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:36.674 11:49:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:36.674 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.674 11:49:26 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.674 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.674 11:49:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:36.674 11:49:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:36.674 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.674 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.674 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.674 11:49:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:36.674 { 00:05:36.674 "name": "Malloc0", 00:05:36.674 "aliases": [ 00:05:36.674 "f3a69402-6977-4a0f-9b52-fda9abe79b83" 00:05:36.674 ], 00:05:36.674 "product_name": "Malloc disk", 00:05:36.674 "block_size": 512, 00:05:36.674 "num_blocks": 16384, 00:05:36.674 "uuid": "f3a69402-6977-4a0f-9b52-fda9abe79b83", 00:05:36.674 "assigned_rate_limits": { 00:05:36.674 "rw_ios_per_sec": 0, 00:05:36.674 "rw_mbytes_per_sec": 0, 00:05:36.674 "r_mbytes_per_sec": 0, 00:05:36.674 "w_mbytes_per_sec": 0 00:05:36.674 }, 00:05:36.674 "claimed": false, 00:05:36.674 "zoned": false, 00:05:36.674 "supported_io_types": { 00:05:36.674 "read": true, 00:05:36.674 "write": true, 00:05:36.674 "unmap": true, 00:05:36.674 "flush": true, 00:05:36.674 "reset": true, 00:05:36.674 "nvme_admin": false, 00:05:36.674 "nvme_io": false, 00:05:36.674 "nvme_io_md": false, 00:05:36.674 "write_zeroes": true, 00:05:36.674 "zcopy": true, 00:05:36.674 "get_zone_info": false, 00:05:36.674 "zone_management": false, 00:05:36.674 "zone_append": false, 00:05:36.674 "compare": false, 00:05:36.674 "compare_and_write": false, 00:05:36.674 "abort": true, 00:05:36.674 "seek_hole": false, 00:05:36.674 "seek_data": false, 00:05:36.674 "copy": true, 00:05:36.674 "nvme_iov_md": false 00:05:36.674 }, 00:05:36.674 "memory_domains": [ 00:05:36.674 { 00:05:36.674 "dma_device_id": "system", 00:05:36.674 "dma_device_type": 1 00:05:36.674 }, 00:05:36.674 { 00:05:36.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.674 "dma_device_type": 2 00:05:36.674 } 00:05:36.674 ], 00:05:36.674 "driver_specific": {} 00:05:36.674 } 00:05:36.674 ]' 00:05:36.674 11:49:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:36.674 11:49:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:36.674 11:49:26 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:36.674 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.674 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.674 [2024-11-27 11:49:26.701020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:36.674 [2024-11-27 11:49:26.701087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:36.674 [2024-11-27 11:49:26.701124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:36.674 [2024-11-27 11:49:26.701149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:36.674 [2024-11-27 11:49:26.703747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:36.674 [2024-11-27 11:49:26.703796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:36.674 Passthru0 00:05:36.674 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.674 
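At this point rpc_integrity has built its two-bdev stack: a malloc bdev claimed by a passthru layered on top. The same steps can be reproduced against a running target with rpc.py (all three commands appear in the trace above; the socket is assumed to be the default /var/tmp/spdk.sock):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 8 512                      # 8 MiB malloc bdev -> Malloc0
  $rpc bdev_passthru_create -b Malloc0 -p Passthru0  # passthru claims Malloc0
  $rpc bdev_get_bdevs | jq length                    # expect 2, as the test checks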
11:49:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:36.674 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.674 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.941 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.941 11:49:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:36.941 { 00:05:36.941 "name": "Malloc0", 00:05:36.941 "aliases": [ 00:05:36.941 "f3a69402-6977-4a0f-9b52-fda9abe79b83" 00:05:36.941 ], 00:05:36.941 "product_name": "Malloc disk", 00:05:36.941 "block_size": 512, 00:05:36.941 "num_blocks": 16384, 00:05:36.941 "uuid": "f3a69402-6977-4a0f-9b52-fda9abe79b83", 00:05:36.941 "assigned_rate_limits": { 00:05:36.941 "rw_ios_per_sec": 0, 00:05:36.941 "rw_mbytes_per_sec": 0, 00:05:36.941 "r_mbytes_per_sec": 0, 00:05:36.941 "w_mbytes_per_sec": 0 00:05:36.941 }, 00:05:36.941 "claimed": true, 00:05:36.941 "claim_type": "exclusive_write", 00:05:36.941 "zoned": false, 00:05:36.941 "supported_io_types": { 00:05:36.941 "read": true, 00:05:36.941 "write": true, 00:05:36.941 "unmap": true, 00:05:36.941 "flush": true, 00:05:36.941 "reset": true, 00:05:36.941 "nvme_admin": false, 00:05:36.941 "nvme_io": false, 00:05:36.941 "nvme_io_md": false, 00:05:36.941 "write_zeroes": true, 00:05:36.941 "zcopy": true, 00:05:36.941 "get_zone_info": false, 00:05:36.941 "zone_management": false, 00:05:36.941 "zone_append": false, 00:05:36.941 "compare": false, 00:05:36.941 "compare_and_write": false, 00:05:36.941 "abort": true, 00:05:36.941 "seek_hole": false, 00:05:36.941 "seek_data": false, 00:05:36.941 "copy": true, 00:05:36.941 "nvme_iov_md": false 00:05:36.941 }, 00:05:36.941 "memory_domains": [ 00:05:36.941 { 00:05:36.941 "dma_device_id": "system", 00:05:36.941 "dma_device_type": 1 00:05:36.941 }, 00:05:36.941 { 00:05:36.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.941 "dma_device_type": 2 00:05:36.941 } 00:05:36.941 ], 00:05:36.941 "driver_specific": {} 00:05:36.941 }, 00:05:36.941 { 00:05:36.941 "name": "Passthru0", 00:05:36.941 "aliases": [ 00:05:36.941 "04abddb2-7705-5178-8d0c-a86eb7b03dbd" 00:05:36.941 ], 00:05:36.941 "product_name": "passthru", 00:05:36.941 "block_size": 512, 00:05:36.941 "num_blocks": 16384, 00:05:36.941 "uuid": "04abddb2-7705-5178-8d0c-a86eb7b03dbd", 00:05:36.941 "assigned_rate_limits": { 00:05:36.941 "rw_ios_per_sec": 0, 00:05:36.941 "rw_mbytes_per_sec": 0, 00:05:36.941 "r_mbytes_per_sec": 0, 00:05:36.941 "w_mbytes_per_sec": 0 00:05:36.941 }, 00:05:36.941 "claimed": false, 00:05:36.941 "zoned": false, 00:05:36.941 "supported_io_types": { 00:05:36.941 "read": true, 00:05:36.941 "write": true, 00:05:36.941 "unmap": true, 00:05:36.941 "flush": true, 00:05:36.941 "reset": true, 00:05:36.941 "nvme_admin": false, 00:05:36.941 "nvme_io": false, 00:05:36.941 "nvme_io_md": false, 00:05:36.941 "write_zeroes": true, 00:05:36.941 "zcopy": true, 00:05:36.941 "get_zone_info": false, 00:05:36.941 "zone_management": false, 00:05:36.941 "zone_append": false, 00:05:36.941 "compare": false, 00:05:36.941 "compare_and_write": false, 00:05:36.941 "abort": true, 00:05:36.941 "seek_hole": false, 00:05:36.941 "seek_data": false, 00:05:36.941 "copy": true, 00:05:36.941 "nvme_iov_md": false 00:05:36.941 }, 00:05:36.941 "memory_domains": [ 00:05:36.941 { 00:05:36.941 "dma_device_id": "system", 00:05:36.941 "dma_device_type": 1 00:05:36.941 }, 00:05:36.941 { 00:05:36.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.941 "dma_device_type": 2 
00:05:36.941 } 00:05:36.941 ], 00:05:36.941 "driver_specific": { 00:05:36.941 "passthru": { 00:05:36.941 "name": "Passthru0", 00:05:36.941 "base_bdev_name": "Malloc0" 00:05:36.941 } 00:05:36.941 } 00:05:36.941 } 00:05:36.941 ]' 00:05:36.941 11:49:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:36.941 11:49:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:36.941 11:49:26 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:36.941 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.941 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.941 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.941 11:49:26 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:36.941 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.941 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.941 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.941 11:49:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:36.941 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.941 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.941 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.941 11:49:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:36.941 11:49:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:36.941 11:49:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:36.941 00:05:36.941 real 0m0.328s 00:05:36.941 user 0m0.172s 00:05:36.941 sys 0m0.058s 00:05:36.941 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.941 11:49:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.941 ************************************ 00:05:36.941 END TEST rpc_integrity 00:05:36.941 ************************************ 00:05:36.941 11:49:26 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:36.941 11:49:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.941 11:49:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.941 11:49:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.941 ************************************ 00:05:36.941 START TEST rpc_plugins 00:05:36.941 ************************************ 00:05:36.941 11:49:26 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:36.941 11:49:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:36.941 11:49:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.941 11:49:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:36.941 11:49:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.941 11:49:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:36.941 11:49:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:36.941 11:49:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.941 11:49:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.201 11:49:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.201 11:49:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:37.201 { 00:05:37.201 "name": "Malloc1", 00:05:37.201 "aliases": 
[ 00:05:37.201 "454a7730-fbaa-443f-bbd6-811711edbf96" 00:05:37.201 ], 00:05:37.201 "product_name": "Malloc disk", 00:05:37.201 "block_size": 4096, 00:05:37.201 "num_blocks": 256, 00:05:37.201 "uuid": "454a7730-fbaa-443f-bbd6-811711edbf96", 00:05:37.201 "assigned_rate_limits": { 00:05:37.201 "rw_ios_per_sec": 0, 00:05:37.201 "rw_mbytes_per_sec": 0, 00:05:37.201 "r_mbytes_per_sec": 0, 00:05:37.201 "w_mbytes_per_sec": 0 00:05:37.201 }, 00:05:37.201 "claimed": false, 00:05:37.201 "zoned": false, 00:05:37.201 "supported_io_types": { 00:05:37.201 "read": true, 00:05:37.201 "write": true, 00:05:37.201 "unmap": true, 00:05:37.201 "flush": true, 00:05:37.201 "reset": true, 00:05:37.201 "nvme_admin": false, 00:05:37.201 "nvme_io": false, 00:05:37.201 "nvme_io_md": false, 00:05:37.201 "write_zeroes": true, 00:05:37.201 "zcopy": true, 00:05:37.201 "get_zone_info": false, 00:05:37.201 "zone_management": false, 00:05:37.201 "zone_append": false, 00:05:37.201 "compare": false, 00:05:37.201 "compare_and_write": false, 00:05:37.201 "abort": true, 00:05:37.201 "seek_hole": false, 00:05:37.201 "seek_data": false, 00:05:37.201 "copy": true, 00:05:37.201 "nvme_iov_md": false 00:05:37.201 }, 00:05:37.201 "memory_domains": [ 00:05:37.201 { 00:05:37.201 "dma_device_id": "system", 00:05:37.201 "dma_device_type": 1 00:05:37.201 }, 00:05:37.201 { 00:05:37.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.201 "dma_device_type": 2 00:05:37.201 } 00:05:37.201 ], 00:05:37.201 "driver_specific": {} 00:05:37.201 } 00:05:37.201 ]' 00:05:37.201 11:49:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:37.201 11:49:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:37.201 11:49:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:37.201 11:49:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.201 11:49:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.201 11:49:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.201 11:49:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:37.201 11:49:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.201 11:49:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.201 11:49:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.201 11:49:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:37.201 11:49:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:37.201 11:49:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:37.201 00:05:37.201 real 0m0.157s 00:05:37.201 user 0m0.089s 00:05:37.201 sys 0m0.024s 00:05:37.201 11:49:27 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.201 ************************************ 00:05:37.201 END TEST rpc_plugins 00:05:37.201 ************************************ 00:05:37.201 11:49:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.201 11:49:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:37.201 11:49:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.201 11:49:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.201 11:49:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.201 ************************************ 00:05:37.201 START TEST rpc_trace_cmd_test 00:05:37.201 ************************************ 00:05:37.202 11:49:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:05:37.202 11:49:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:37.202 11:49:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:37.202 11:49:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.202 11:49:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:37.202 11:49:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.202 11:49:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:37.202 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57723", 00:05:37.202 "tpoint_group_mask": "0x8", 00:05:37.202 "iscsi_conn": { 00:05:37.202 "mask": "0x2", 00:05:37.202 "tpoint_mask": "0x0" 00:05:37.202 }, 00:05:37.202 "scsi": { 00:05:37.202 "mask": "0x4", 00:05:37.202 "tpoint_mask": "0x0" 00:05:37.202 }, 00:05:37.202 "bdev": { 00:05:37.202 "mask": "0x8", 00:05:37.202 "tpoint_mask": "0xffffffffffffffff" 00:05:37.202 }, 00:05:37.202 "nvmf_rdma": { 00:05:37.202 "mask": "0x10", 00:05:37.202 "tpoint_mask": "0x0" 00:05:37.202 }, 00:05:37.202 "nvmf_tcp": { 00:05:37.202 "mask": "0x20", 00:05:37.202 "tpoint_mask": "0x0" 00:05:37.202 }, 00:05:37.202 "ftl": { 00:05:37.202 "mask": "0x40", 00:05:37.202 "tpoint_mask": "0x0" 00:05:37.202 }, 00:05:37.202 "blobfs": { 00:05:37.202 "mask": "0x80", 00:05:37.202 "tpoint_mask": "0x0" 00:05:37.202 }, 00:05:37.202 "dsa": { 00:05:37.202 "mask": "0x200", 00:05:37.202 "tpoint_mask": "0x0" 00:05:37.202 }, 00:05:37.202 "thread": { 00:05:37.202 "mask": "0x400", 00:05:37.202 "tpoint_mask": "0x0" 00:05:37.202 }, 00:05:37.202 "nvme_pcie": { 00:05:37.202 "mask": "0x800", 00:05:37.202 "tpoint_mask": "0x0" 00:05:37.202 }, 00:05:37.202 "iaa": { 00:05:37.202 "mask": "0x1000", 00:05:37.202 "tpoint_mask": "0x0" 00:05:37.202 }, 00:05:37.202 "nvme_tcp": { 00:05:37.202 "mask": "0x2000", 00:05:37.202 "tpoint_mask": "0x0" 00:05:37.202 }, 00:05:37.202 "bdev_nvme": { 00:05:37.202 "mask": "0x4000", 00:05:37.202 "tpoint_mask": "0x0" 00:05:37.202 }, 00:05:37.202 "sock": { 00:05:37.202 "mask": "0x8000", 00:05:37.202 "tpoint_mask": "0x0" 00:05:37.202 }, 00:05:37.202 "blob": { 00:05:37.202 "mask": "0x10000", 00:05:37.202 "tpoint_mask": "0x0" 00:05:37.202 }, 00:05:37.202 "bdev_raid": { 00:05:37.202 "mask": "0x20000", 00:05:37.202 "tpoint_mask": "0x0" 00:05:37.202 }, 00:05:37.202 "scheduler": { 00:05:37.202 "mask": "0x40000", 00:05:37.202 "tpoint_mask": "0x0" 00:05:37.202 } 00:05:37.202 }' 00:05:37.202 11:49:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:37.202 11:49:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:37.202 11:49:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:37.461 11:49:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:37.461 11:49:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:37.461 11:49:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:37.461 11:49:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:37.461 11:49:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:37.461 11:49:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:37.461 11:49:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:37.461 00:05:37.461 real 0m0.212s 00:05:37.461 user 0m0.173s 00:05:37.461 sys 0m0.029s 00:05:37.461 11:49:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:37.461 ************************************ 00:05:37.461 END TEST rpc_trace_cmd_test 00:05:37.461 11:49:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:37.461 ************************************ 00:05:37.461 11:49:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:37.461 11:49:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:37.461 11:49:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:37.461 11:49:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.461 11:49:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.461 11:49:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.461 ************************************ 00:05:37.461 START TEST rpc_daemon_integrity 00:05:37.461 ************************************ 00:05:37.461 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:37.461 11:49:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:37.461 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.461 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.461 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.461 11:49:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:37.461 11:49:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:37.462 11:49:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:37.462 11:49:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:37.462 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.462 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.721 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.721 11:49:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:37.721 11:49:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:37.721 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.721 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.721 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.721 11:49:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:37.721 { 00:05:37.721 "name": "Malloc2", 00:05:37.721 "aliases": [ 00:05:37.721 "7dbb75e5-f9a6-43d1-8fa1-9794f912d1b5" 00:05:37.721 ], 00:05:37.721 "product_name": "Malloc disk", 00:05:37.721 "block_size": 512, 00:05:37.721 "num_blocks": 16384, 00:05:37.721 "uuid": "7dbb75e5-f9a6-43d1-8fa1-9794f912d1b5", 00:05:37.721 "assigned_rate_limits": { 00:05:37.721 "rw_ios_per_sec": 0, 00:05:37.721 "rw_mbytes_per_sec": 0, 00:05:37.721 "r_mbytes_per_sec": 0, 00:05:37.721 "w_mbytes_per_sec": 0 00:05:37.721 }, 00:05:37.721 "claimed": false, 00:05:37.721 "zoned": false, 00:05:37.721 "supported_io_types": { 00:05:37.721 "read": true, 00:05:37.721 "write": true, 00:05:37.721 "unmap": true, 00:05:37.721 "flush": true, 00:05:37.721 "reset": true, 00:05:37.721 "nvme_admin": false, 00:05:37.721 "nvme_io": false, 00:05:37.721 "nvme_io_md": false, 00:05:37.721 "write_zeroes": true, 00:05:37.721 "zcopy": true, 00:05:37.721 "get_zone_info": false, 00:05:37.721 "zone_management": false, 00:05:37.721 "zone_append": false, 00:05:37.721 "compare": false, 00:05:37.721 
"compare_and_write": false, 00:05:37.721 "abort": true, 00:05:37.721 "seek_hole": false, 00:05:37.721 "seek_data": false, 00:05:37.721 "copy": true, 00:05:37.721 "nvme_iov_md": false 00:05:37.721 }, 00:05:37.721 "memory_domains": [ 00:05:37.721 { 00:05:37.721 "dma_device_id": "system", 00:05:37.721 "dma_device_type": 1 00:05:37.721 }, 00:05:37.721 { 00:05:37.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.721 "dma_device_type": 2 00:05:37.721 } 00:05:37.721 ], 00:05:37.721 "driver_specific": {} 00:05:37.721 } 00:05:37.721 ]' 00:05:37.721 11:49:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:37.721 11:49:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:37.721 11:49:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:37.721 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.721 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.721 [2024-11-27 11:49:27.591316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:37.721 [2024-11-27 11:49:27.591390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:37.721 [2024-11-27 11:49:27.591417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:37.721 [2024-11-27 11:49:27.591431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:37.721 [2024-11-27 11:49:27.593890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:37.721 [2024-11-27 11:49:27.593936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:37.722 Passthru0 00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:37.722 { 00:05:37.722 "name": "Malloc2", 00:05:37.722 "aliases": [ 00:05:37.722 "7dbb75e5-f9a6-43d1-8fa1-9794f912d1b5" 00:05:37.722 ], 00:05:37.722 "product_name": "Malloc disk", 00:05:37.722 "block_size": 512, 00:05:37.722 "num_blocks": 16384, 00:05:37.722 "uuid": "7dbb75e5-f9a6-43d1-8fa1-9794f912d1b5", 00:05:37.722 "assigned_rate_limits": { 00:05:37.722 "rw_ios_per_sec": 0, 00:05:37.722 "rw_mbytes_per_sec": 0, 00:05:37.722 "r_mbytes_per_sec": 0, 00:05:37.722 "w_mbytes_per_sec": 0 00:05:37.722 }, 00:05:37.722 "claimed": true, 00:05:37.722 "claim_type": "exclusive_write", 00:05:37.722 "zoned": false, 00:05:37.722 "supported_io_types": { 00:05:37.722 "read": true, 00:05:37.722 "write": true, 00:05:37.722 "unmap": true, 00:05:37.722 "flush": true, 00:05:37.722 "reset": true, 00:05:37.722 "nvme_admin": false, 00:05:37.722 "nvme_io": false, 00:05:37.722 "nvme_io_md": false, 00:05:37.722 "write_zeroes": true, 00:05:37.722 "zcopy": true, 00:05:37.722 "get_zone_info": false, 00:05:37.722 "zone_management": false, 00:05:37.722 "zone_append": false, 00:05:37.722 "compare": false, 00:05:37.722 "compare_and_write": false, 00:05:37.722 "abort": true, 00:05:37.722 "seek_hole": false, 00:05:37.722 "seek_data": false, 
00:05:37.722 "copy": true, 00:05:37.722 "nvme_iov_md": false 00:05:37.722 }, 00:05:37.722 "memory_domains": [ 00:05:37.722 { 00:05:37.722 "dma_device_id": "system", 00:05:37.722 "dma_device_type": 1 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.722 "dma_device_type": 2 00:05:37.722 } 00:05:37.722 ], 00:05:37.722 "driver_specific": {} 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "name": "Passthru0", 00:05:37.722 "aliases": [ 00:05:37.722 "5c999b17-81ab-56c1-9d36-5987cb297fb1" 00:05:37.722 ], 00:05:37.722 "product_name": "passthru", 00:05:37.722 "block_size": 512, 00:05:37.722 "num_blocks": 16384, 00:05:37.722 "uuid": "5c999b17-81ab-56c1-9d36-5987cb297fb1", 00:05:37.722 "assigned_rate_limits": { 00:05:37.722 "rw_ios_per_sec": 0, 00:05:37.722 "rw_mbytes_per_sec": 0, 00:05:37.722 "r_mbytes_per_sec": 0, 00:05:37.722 "w_mbytes_per_sec": 0 00:05:37.722 }, 00:05:37.722 "claimed": false, 00:05:37.722 "zoned": false, 00:05:37.722 "supported_io_types": { 00:05:37.722 "read": true, 00:05:37.722 "write": true, 00:05:37.722 "unmap": true, 00:05:37.722 "flush": true, 00:05:37.722 "reset": true, 00:05:37.722 "nvme_admin": false, 00:05:37.722 "nvme_io": false, 00:05:37.722 "nvme_io_md": false, 00:05:37.722 "write_zeroes": true, 00:05:37.722 "zcopy": true, 00:05:37.722 "get_zone_info": false, 00:05:37.722 "zone_management": false, 00:05:37.722 "zone_append": false, 00:05:37.722 "compare": false, 00:05:37.722 "compare_and_write": false, 00:05:37.722 "abort": true, 00:05:37.722 "seek_hole": false, 00:05:37.722 "seek_data": false, 00:05:37.722 "copy": true, 00:05:37.722 "nvme_iov_md": false 00:05:37.722 }, 00:05:37.722 "memory_domains": [ 00:05:37.722 { 00:05:37.722 "dma_device_id": "system", 00:05:37.722 "dma_device_type": 1 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.722 "dma_device_type": 2 00:05:37.722 } 00:05:37.722 ], 00:05:37.722 "driver_specific": { 00:05:37.722 "passthru": { 00:05:37.722 "name": "Passthru0", 00:05:37.722 "base_bdev_name": "Malloc2" 00:05:37.722 } 00:05:37.722 } 00:05:37.722 } 00:05:37.722 ]' 00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:05:37.722 11:49:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:37.982 11:49:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:37.982 00:05:37.982 real 0m0.345s 00:05:37.982 user 0m0.186s 00:05:37.982 sys 0m0.059s 00:05:37.982 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.982 11:49:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.982 ************************************ 00:05:37.982 END TEST rpc_daemon_integrity 00:05:37.982 ************************************ 00:05:37.982 11:49:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:37.982 11:49:27 rpc -- rpc/rpc.sh@84 -- # killprocess 57723 00:05:37.982 11:49:27 rpc -- common/autotest_common.sh@954 -- # '[' -z 57723 ']' 00:05:37.982 11:49:27 rpc -- common/autotest_common.sh@958 -- # kill -0 57723 00:05:37.982 11:49:27 rpc -- common/autotest_common.sh@959 -- # uname 00:05:37.982 11:49:27 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.982 11:49:27 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57723 00:05:37.982 11:49:27 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.982 11:49:27 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.982 killing process with pid 57723 00:05:37.982 11:49:27 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57723' 00:05:37.982 11:49:27 rpc -- common/autotest_common.sh@973 -- # kill 57723 00:05:37.982 11:49:27 rpc -- common/autotest_common.sh@978 -- # wait 57723 00:05:40.523 00:05:40.523 real 0m5.190s 00:05:40.523 user 0m5.637s 00:05:40.523 sys 0m0.949s 00:05:40.523 11:49:30 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.523 11:49:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.523 ************************************ 00:05:40.523 END TEST rpc 00:05:40.523 ************************************ 00:05:40.523 11:49:30 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:40.523 11:49:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.523 11:49:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.523 11:49:30 -- common/autotest_common.sh@10 -- # set +x 00:05:40.523 ************************************ 00:05:40.523 START TEST skip_rpc 00:05:40.523 ************************************ 00:05:40.523 11:49:30 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:40.523 * Looking for test storage... 
00:05:40.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:40.523 11:49:30 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:40.523 11:49:30 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:40.523 11:49:30 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:40.523 11:49:30 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.523 11:49:30 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:40.523 11:49:30 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.523 11:49:30 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:40.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.523 --rc genhtml_branch_coverage=1 00:05:40.523 --rc genhtml_function_coverage=1 00:05:40.523 --rc genhtml_legend=1 00:05:40.523 --rc geninfo_all_blocks=1 00:05:40.523 --rc geninfo_unexecuted_blocks=1 00:05:40.523 00:05:40.523 ' 00:05:40.523 11:49:30 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:40.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.523 --rc genhtml_branch_coverage=1 00:05:40.523 --rc genhtml_function_coverage=1 00:05:40.523 --rc genhtml_legend=1 00:05:40.523 --rc geninfo_all_blocks=1 00:05:40.523 --rc geninfo_unexecuted_blocks=1 00:05:40.523 00:05:40.523 ' 00:05:40.523 11:49:30 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:40.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.523 --rc genhtml_branch_coverage=1 00:05:40.523 --rc genhtml_function_coverage=1 00:05:40.523 --rc genhtml_legend=1 00:05:40.523 --rc geninfo_all_blocks=1 00:05:40.523 --rc geninfo_unexecuted_blocks=1 00:05:40.523 00:05:40.523 ' 00:05:40.523 11:49:30 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:40.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.523 --rc genhtml_branch_coverage=1 00:05:40.523 --rc genhtml_function_coverage=1 00:05:40.523 --rc genhtml_legend=1 00:05:40.523 --rc geninfo_all_blocks=1 00:05:40.523 --rc geninfo_unexecuted_blocks=1 00:05:40.523 00:05:40.523 ' 00:05:40.523 11:49:30 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:40.523 11:49:30 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:40.523 11:49:30 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:40.523 11:49:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.523 11:49:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.523 11:49:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.523 ************************************ 00:05:40.523 START TEST skip_rpc 00:05:40.523 ************************************ 00:05:40.523 11:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:40.523 11:49:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57953 00:05:40.523 11:49:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:40.523 11:49:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.523 11:49:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:40.784 [2024-11-27 11:49:30.652647] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
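The launch line above is the whole setup for skip_rpc: the target comes up with --no-rpc-server, so the spdk_get_version call that follows the five-second sleep has nothing to talk to and must fail. Sketched end to end (binary path from the log; rpc.py on PATH is an assumption):

# Scenario sketch, not the rpc/skip_rpc.sh source itself.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5                                    # the test also just sleeps here
if rpc.py spdk_get_version 2>/dev/null; then
    echo "unexpected: RPC answered with no RPC server running" >&2
fi
kill "$spdk_pid"; wait "$spdk_pid"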
00:05:40.784 [2024-11-27 11:49:30.652775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57953 ] 00:05:40.784 [2024-11-27 11:49:30.832877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.045 [2024-11-27 11:49:30.950476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57953 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57953 ']' 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57953 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57953 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.325 killing process with pid 57953 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57953' 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57953 00:05:46.325 11:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57953 00:05:48.235 ************************************ 00:05:48.235 END TEST skip_rpc 00:05:48.235 ************************************ 00:05:48.235 00:05:48.235 real 0m7.404s 00:05:48.235 user 0m6.895s 00:05:48.235 sys 0m0.424s 00:05:48.235 11:49:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.235 11:49:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:48.235 11:49:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:48.235 11:49:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.235 11:49:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.235 11:49:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.235 ************************************ 00:05:48.235 START TEST skip_rpc_with_json 00:05:48.235 ************************************ 00:05:48.235 11:49:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:48.235 11:49:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:48.235 11:49:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58066 00:05:48.235 11:49:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.235 11:49:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58066 00:05:48.235 11:49:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58066 ']' 00:05:48.235 11:49:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.235 11:49:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.235 11:49:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.235 11:49:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.235 11:49:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.235 11:49:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.235 [2024-11-27 11:49:38.141183] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
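Unlike skip_rpc's fixed sleep, this suite parks in waitforlisten until /var/tmp/spdk.sock actually answers. A simplified sketch of such a helper; the probe via rpc_get_methods and the retry budget are assumptions, and the real autotest_common.sh helper carries extra diagnostics:

# waitforlisten-style helper, simplified; details are assumptions.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1           # target died while starting
        rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.1
    done
    return 1                                             # never came up listening
}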
00:05:48.235 [2024-11-27 11:49:38.141334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58066 ] 00:05:48.495 [2024-11-27 11:49:38.320883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.495 [2024-11-27 11:49:38.433400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.433 11:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.433 11:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:49.433 11:49:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:49.433 11:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.433 11:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.433 [2024-11-27 11:49:39.290507] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:49.433 request: 00:05:49.433 { 00:05:49.433 "trtype": "tcp", 00:05:49.433 "method": "nvmf_get_transports", 00:05:49.433 "req_id": 1 00:05:49.433 } 00:05:49.433 Got JSON-RPC error response 00:05:49.433 response: 00:05:49.433 { 00:05:49.433 "code": -19, 00:05:49.433 "message": "No such device" 00:05:49.433 } 00:05:49.433 11:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:49.433 11:49:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:49.433 11:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.433 11:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.433 [2024-11-27 11:49:39.306599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:49.433 11:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.433 11:49:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:49.433 11:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.433 11:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.693 11:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.693 11:49:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:49.693 { 00:05:49.693 "subsystems": [ 00:05:49.693 { 00:05:49.693 "subsystem": "fsdev", 00:05:49.693 "config": [ 00:05:49.693 { 00:05:49.693 "method": "fsdev_set_opts", 00:05:49.693 "params": { 00:05:49.693 "fsdev_io_pool_size": 65535, 00:05:49.693 "fsdev_io_cache_size": 256 00:05:49.693 } 00:05:49.693 } 00:05:49.693 ] 00:05:49.693 }, 00:05:49.693 { 00:05:49.693 "subsystem": "keyring", 00:05:49.693 "config": [] 00:05:49.693 }, 00:05:49.693 { 00:05:49.693 "subsystem": "iobuf", 00:05:49.693 "config": [ 00:05:49.693 { 00:05:49.693 "method": "iobuf_set_options", 00:05:49.693 "params": { 00:05:49.693 "small_pool_count": 8192, 00:05:49.694 "large_pool_count": 1024, 00:05:49.694 "small_bufsize": 8192, 00:05:49.694 "large_bufsize": 135168, 00:05:49.694 "enable_numa": false 00:05:49.694 } 00:05:49.694 } 00:05:49.694 ] 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "subsystem": "sock", 00:05:49.694 "config": [ 00:05:49.694 { 
00:05:49.694 "method": "sock_set_default_impl", 00:05:49.694 "params": { 00:05:49.694 "impl_name": "posix" 00:05:49.694 } 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "method": "sock_impl_set_options", 00:05:49.694 "params": { 00:05:49.694 "impl_name": "ssl", 00:05:49.694 "recv_buf_size": 4096, 00:05:49.694 "send_buf_size": 4096, 00:05:49.694 "enable_recv_pipe": true, 00:05:49.694 "enable_quickack": false, 00:05:49.694 "enable_placement_id": 0, 00:05:49.694 "enable_zerocopy_send_server": true, 00:05:49.694 "enable_zerocopy_send_client": false, 00:05:49.694 "zerocopy_threshold": 0, 00:05:49.694 "tls_version": 0, 00:05:49.694 "enable_ktls": false 00:05:49.694 } 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "method": "sock_impl_set_options", 00:05:49.694 "params": { 00:05:49.694 "impl_name": "posix", 00:05:49.694 "recv_buf_size": 2097152, 00:05:49.694 "send_buf_size": 2097152, 00:05:49.694 "enable_recv_pipe": true, 00:05:49.694 "enable_quickack": false, 00:05:49.694 "enable_placement_id": 0, 00:05:49.694 "enable_zerocopy_send_server": true, 00:05:49.694 "enable_zerocopy_send_client": false, 00:05:49.694 "zerocopy_threshold": 0, 00:05:49.694 "tls_version": 0, 00:05:49.694 "enable_ktls": false 00:05:49.694 } 00:05:49.694 } 00:05:49.694 ] 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "subsystem": "vmd", 00:05:49.694 "config": [] 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "subsystem": "accel", 00:05:49.694 "config": [ 00:05:49.694 { 00:05:49.694 "method": "accel_set_options", 00:05:49.694 "params": { 00:05:49.694 "small_cache_size": 128, 00:05:49.694 "large_cache_size": 16, 00:05:49.694 "task_count": 2048, 00:05:49.694 "sequence_count": 2048, 00:05:49.694 "buf_count": 2048 00:05:49.694 } 00:05:49.694 } 00:05:49.694 ] 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "subsystem": "bdev", 00:05:49.694 "config": [ 00:05:49.694 { 00:05:49.694 "method": "bdev_set_options", 00:05:49.694 "params": { 00:05:49.694 "bdev_io_pool_size": 65535, 00:05:49.694 "bdev_io_cache_size": 256, 00:05:49.694 "bdev_auto_examine": true, 00:05:49.694 "iobuf_small_cache_size": 128, 00:05:49.694 "iobuf_large_cache_size": 16 00:05:49.694 } 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "method": "bdev_raid_set_options", 00:05:49.694 "params": { 00:05:49.694 "process_window_size_kb": 1024, 00:05:49.694 "process_max_bandwidth_mb_sec": 0 00:05:49.694 } 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "method": "bdev_iscsi_set_options", 00:05:49.694 "params": { 00:05:49.694 "timeout_sec": 30 00:05:49.694 } 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "method": "bdev_nvme_set_options", 00:05:49.694 "params": { 00:05:49.694 "action_on_timeout": "none", 00:05:49.694 "timeout_us": 0, 00:05:49.694 "timeout_admin_us": 0, 00:05:49.694 "keep_alive_timeout_ms": 10000, 00:05:49.694 "arbitration_burst": 0, 00:05:49.694 "low_priority_weight": 0, 00:05:49.694 "medium_priority_weight": 0, 00:05:49.694 "high_priority_weight": 0, 00:05:49.694 "nvme_adminq_poll_period_us": 10000, 00:05:49.694 "nvme_ioq_poll_period_us": 0, 00:05:49.694 "io_queue_requests": 0, 00:05:49.694 "delay_cmd_submit": true, 00:05:49.694 "transport_retry_count": 4, 00:05:49.694 "bdev_retry_count": 3, 00:05:49.694 "transport_ack_timeout": 0, 00:05:49.694 "ctrlr_loss_timeout_sec": 0, 00:05:49.694 "reconnect_delay_sec": 0, 00:05:49.694 "fast_io_fail_timeout_sec": 0, 00:05:49.694 "disable_auto_failback": false, 00:05:49.694 "generate_uuids": false, 00:05:49.694 "transport_tos": 0, 00:05:49.694 "nvme_error_stat": false, 00:05:49.694 "rdma_srq_size": 0, 00:05:49.694 "io_path_stat": false, 
00:05:49.694 "allow_accel_sequence": false, 00:05:49.694 "rdma_max_cq_size": 0, 00:05:49.694 "rdma_cm_event_timeout_ms": 0, 00:05:49.694 "dhchap_digests": [ 00:05:49.694 "sha256", 00:05:49.694 "sha384", 00:05:49.694 "sha512" 00:05:49.694 ], 00:05:49.694 "dhchap_dhgroups": [ 00:05:49.694 "null", 00:05:49.694 "ffdhe2048", 00:05:49.694 "ffdhe3072", 00:05:49.694 "ffdhe4096", 00:05:49.694 "ffdhe6144", 00:05:49.694 "ffdhe8192" 00:05:49.694 ] 00:05:49.694 } 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "method": "bdev_nvme_set_hotplug", 00:05:49.694 "params": { 00:05:49.694 "period_us": 100000, 00:05:49.694 "enable": false 00:05:49.694 } 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "method": "bdev_wait_for_examine" 00:05:49.694 } 00:05:49.694 ] 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "subsystem": "scsi", 00:05:49.694 "config": null 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "subsystem": "scheduler", 00:05:49.694 "config": [ 00:05:49.694 { 00:05:49.694 "method": "framework_set_scheduler", 00:05:49.694 "params": { 00:05:49.694 "name": "static" 00:05:49.694 } 00:05:49.694 } 00:05:49.694 ] 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "subsystem": "vhost_scsi", 00:05:49.694 "config": [] 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "subsystem": "vhost_blk", 00:05:49.694 "config": [] 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "subsystem": "ublk", 00:05:49.694 "config": [] 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "subsystem": "nbd", 00:05:49.694 "config": [] 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "subsystem": "nvmf", 00:05:49.694 "config": [ 00:05:49.694 { 00:05:49.694 "method": "nvmf_set_config", 00:05:49.694 "params": { 00:05:49.694 "discovery_filter": "match_any", 00:05:49.694 "admin_cmd_passthru": { 00:05:49.694 "identify_ctrlr": false 00:05:49.694 }, 00:05:49.694 "dhchap_digests": [ 00:05:49.694 "sha256", 00:05:49.694 "sha384", 00:05:49.694 "sha512" 00:05:49.694 ], 00:05:49.694 "dhchap_dhgroups": [ 00:05:49.694 "null", 00:05:49.694 "ffdhe2048", 00:05:49.694 "ffdhe3072", 00:05:49.694 "ffdhe4096", 00:05:49.694 "ffdhe6144", 00:05:49.694 "ffdhe8192" 00:05:49.694 ] 00:05:49.694 } 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "method": "nvmf_set_max_subsystems", 00:05:49.694 "params": { 00:05:49.694 "max_subsystems": 1024 00:05:49.694 } 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "method": "nvmf_set_crdt", 00:05:49.694 "params": { 00:05:49.694 "crdt1": 0, 00:05:49.694 "crdt2": 0, 00:05:49.694 "crdt3": 0 00:05:49.694 } 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "method": "nvmf_create_transport", 00:05:49.694 "params": { 00:05:49.694 "trtype": "TCP", 00:05:49.694 "max_queue_depth": 128, 00:05:49.694 "max_io_qpairs_per_ctrlr": 127, 00:05:49.694 "in_capsule_data_size": 4096, 00:05:49.694 "max_io_size": 131072, 00:05:49.694 "io_unit_size": 131072, 00:05:49.694 "max_aq_depth": 128, 00:05:49.694 "num_shared_buffers": 511, 00:05:49.694 "buf_cache_size": 4294967295, 00:05:49.694 "dif_insert_or_strip": false, 00:05:49.694 "zcopy": false, 00:05:49.694 "c2h_success": true, 00:05:49.694 "sock_priority": 0, 00:05:49.694 "abort_timeout_sec": 1, 00:05:49.694 "ack_timeout": 0, 00:05:49.694 "data_wr_pool_size": 0 00:05:49.694 } 00:05:49.694 } 00:05:49.694 ] 00:05:49.694 }, 00:05:49.694 { 00:05:49.694 "subsystem": "iscsi", 00:05:49.694 "config": [ 00:05:49.694 { 00:05:49.694 "method": "iscsi_set_options", 00:05:49.694 "params": { 00:05:49.694 "node_base": "iqn.2016-06.io.spdk", 00:05:49.694 "max_sessions": 128, 00:05:49.694 "max_connections_per_session": 2, 00:05:49.694 "max_queue_depth": 64, 00:05:49.694 
"default_time2wait": 2, 00:05:49.694 "default_time2retain": 20, 00:05:49.694 "first_burst_length": 8192, 00:05:49.694 "immediate_data": true, 00:05:49.694 "allow_duplicated_isid": false, 00:05:49.694 "error_recovery_level": 0, 00:05:49.694 "nop_timeout": 60, 00:05:49.694 "nop_in_interval": 30, 00:05:49.694 "disable_chap": false, 00:05:49.694 "require_chap": false, 00:05:49.694 "mutual_chap": false, 00:05:49.694 "chap_group": 0, 00:05:49.694 "max_large_datain_per_connection": 64, 00:05:49.694 "max_r2t_per_connection": 4, 00:05:49.694 "pdu_pool_size": 36864, 00:05:49.694 "immediate_data_pool_size": 16384, 00:05:49.694 "data_out_pool_size": 2048 00:05:49.694 } 00:05:49.694 } 00:05:49.694 ] 00:05:49.694 } 00:05:49.694 ] 00:05:49.694 } 00:05:49.694 11:49:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:49.694 11:49:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58066 00:05:49.694 11:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58066 ']' 00:05:49.694 11:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58066 00:05:49.695 11:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:49.695 11:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.695 11:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58066 00:05:49.695 killing process with pid 58066 00:05:49.695 11:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.695 11:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.695 11:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58066' 00:05:49.695 11:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58066 00:05:49.695 11:49:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58066 00:05:52.232 11:49:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58112 00:05:52.232 11:49:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:52.232 11:49:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:57.509 11:49:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58112 00:05:57.509 11:49:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58112 ']' 00:05:57.509 11:49:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58112 00:05:57.509 11:49:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:57.509 11:49:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.509 11:49:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58112 00:05:57.509 killing process with pid 58112 00:05:57.509 11:49:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.509 11:49:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.509 11:49:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58112' 00:05:57.509 11:49:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58112 00:05:57.509 11:49:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58112 00:05:59.418 11:49:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:59.418 11:49:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:59.418 00:05:59.418 real 0m11.255s 00:05:59.418 user 0m10.629s 00:05:59.418 sys 0m0.930s 00:05:59.418 ************************************ 00:05:59.418 END TEST skip_rpc_with_json 00:05:59.418 ************************************ 00:05:59.418 11:49:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.418 11:49:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:59.418 11:49:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:59.418 11:49:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.418 11:49:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.418 11:49:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.418 ************************************ 00:05:59.418 START TEST skip_rpc_with_delay 00:05:59.418 ************************************ 00:05:59.418 11:49:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:59.418 11:49:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.418 11:49:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:59.418 11:49:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.418 11:49:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.418 11:49:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.418 11:49:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.418 11:49:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.418 11:49:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.418 11:49:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.418 11:49:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.418 11:49:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:59.418 11:49:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.678 [2024-11-27 11:49:49.468719] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
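That *ERROR* line is the entire point of skip_rpc_with_delay: --wait-for-rpc is rejected up front when --no-rpc-server guarantees no RPC server will ever start. Reproduced directly with the flag combination from the log (expect a non-zero exit):

# The app must refuse the combination and exit non-zero.
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: target started despite --wait-for-rpc" >&2
fi
# Expected on stderr:
# "Cannot use '--wait-for-rpc' if no RPC server is going to be started."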
00:05:59.678 ************************************ 00:05:59.678 END TEST skip_rpc_with_delay 00:05:59.678 ************************************ 00:05:59.678 11:49:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:59.678 11:49:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:59.678 11:49:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:59.678 11:49:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:59.678 00:05:59.678 real 0m0.176s 00:05:59.678 user 0m0.086s 00:05:59.678 sys 0m0.089s 00:05:59.678 11:49:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.678 11:49:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:59.678 11:49:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:59.678 11:49:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:59.678 11:49:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:59.678 11:49:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.678 11:49:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.678 11:49:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.678 ************************************ 00:05:59.678 START TEST exit_on_failed_rpc_init 00:05:59.678 ************************************ 00:05:59.678 11:49:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:59.678 11:49:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58251 00:05:59.678 11:49:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.678 11:49:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58251 00:05:59.678 11:49:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58251 ']' 00:05:59.678 11:49:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.678 11:49:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.678 11:49:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.678 11:49:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.678 11:49:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:59.678 [2024-11-27 11:49:49.717960] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
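The es= bookkeeping threaded through these traces comes from the NOT wrapper: run a command that is supposed to fail, and succeed only if it did. A simplified sketch; the original also normalizes large exit statuses, which is what the es=234 -> es=106 -> es=1 chain further below is doing:

# Simplified NOT wrapper; autotest_common.sh does more status normalization.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))              # pass only when the wrapped command failed
}
# usage: NOT rpc.py spdk_get_version    # passes while nothing is listening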
00:05:59.678 [2024-11-27 11:49:49.718104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58251 ] 00:05:59.938 [2024-11-27 11:49:49.902156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.198 [2024-11-27 11:49:50.016672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.137 11:49:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.137 11:49:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:01.137 11:49:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.137 11:49:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.137 11:49:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:01.137 11:49:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.137 11:49:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.137 11:49:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.137 11:49:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.137 11:49:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.137 11:49:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.137 11:49:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.137 11:49:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.137 11:49:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:01.137 11:49:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.137 [2024-11-27 11:49:50.981854] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:01.137 [2024-11-27 11:49:50.982346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58269 ] 00:06:01.137 [2024-11-27 11:49:51.161559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.397 [2024-11-27 11:49:51.276988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.397 [2024-11-27 11:49:51.277290] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
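exit_on_failed_rpc_init provokes exactly the error above: a second spdk_tgt (mask 0x2) races for the Unix socket the first target (mask 0x1, pid 58251) already owns, fails rpc_listen, and must exit non-zero. As a sketch (plain sleep standing in for the real waitforlisten):

# Socket-collision sketch; both targets default to /var/tmp/spdk.sock.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &    # first target claims the socket
first=$!
sleep 5
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2; then
    echo "unexpected: second target brought up its RPC server" >&2
fi
kill "$first"; wait "$first"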
00:06:01.397 [2024-11-27 11:49:51.277314] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:01.397 [2024-11-27 11:49:51.277335] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:01.668 11:49:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:01.668 11:49:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.668 11:49:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:01.668 11:49:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:01.668 11:49:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:01.668 11:49:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.668 11:49:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:01.668 11:49:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58251 00:06:01.668 11:49:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58251 ']' 00:06:01.668 11:49:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58251 00:06:01.668 11:49:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:01.668 11:49:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.668 11:49:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58251 00:06:01.668 killing process with pid 58251 00:06:01.668 11:49:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.668 11:49:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.668 11:49:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58251' 00:06:01.668 11:49:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58251 00:06:01.668 11:49:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58251 00:06:04.210 00:06:04.210 real 0m4.308s 00:06:04.210 user 0m4.574s 00:06:04.210 sys 0m0.637s 00:06:04.210 11:49:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.210 ************************************ 00:06:04.210 END TEST exit_on_failed_rpc_init 00:06:04.210 ************************************ 00:06:04.210 11:49:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:04.210 11:49:53 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:04.210 00:06:04.210 real 0m23.668s 00:06:04.210 user 0m22.395s 00:06:04.210 sys 0m2.403s 00:06:04.210 11:49:53 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.210 11:49:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.210 ************************************ 00:06:04.210 END TEST skip_rpc 00:06:04.210 ************************************ 00:06:04.210 11:49:54 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:04.210 11:49:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.210 11:49:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.210 11:49:54 -- common/autotest_common.sh@10 -- # set +x 00:06:04.210 
************************************ 00:06:04.210 START TEST rpc_client 00:06:04.210 ************************************ 00:06:04.210 11:49:54 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:04.210 * Looking for test storage... 00:06:04.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:04.210 11:49:54 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:04.210 11:49:54 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:04.210 11:49:54 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:04.527 11:49:54 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.527 11:49:54 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:04.527 11:49:54 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.527 11:49:54 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:04.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.527 --rc genhtml_branch_coverage=1 00:06:04.527 --rc genhtml_function_coverage=1 00:06:04.527 --rc genhtml_legend=1 00:06:04.527 --rc geninfo_all_blocks=1 00:06:04.527 --rc geninfo_unexecuted_blocks=1 00:06:04.527 00:06:04.527 ' 00:06:04.527 11:49:54 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:04.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.527 --rc genhtml_branch_coverage=1 00:06:04.527 --rc genhtml_function_coverage=1 00:06:04.527 --rc genhtml_legend=1 00:06:04.527 --rc geninfo_all_blocks=1 00:06:04.527 --rc geninfo_unexecuted_blocks=1 00:06:04.527 00:06:04.527 ' 00:06:04.527 11:49:54 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:04.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.527 --rc genhtml_branch_coverage=1 00:06:04.527 --rc genhtml_function_coverage=1 00:06:04.527 --rc genhtml_legend=1 00:06:04.527 --rc geninfo_all_blocks=1 00:06:04.527 --rc geninfo_unexecuted_blocks=1 00:06:04.527 00:06:04.527 ' 00:06:04.527 11:49:54 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:04.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.527 --rc genhtml_branch_coverage=1 00:06:04.527 --rc genhtml_function_coverage=1 00:06:04.527 --rc genhtml_legend=1 00:06:04.527 --rc geninfo_all_blocks=1 00:06:04.527 --rc geninfo_unexecuted_blocks=1 00:06:04.527 00:06:04.527 ' 00:06:04.527 11:49:54 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:04.527 OK 00:06:04.527 11:49:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:04.527 00:06:04.527 real 0m0.327s 00:06:04.527 user 0m0.178s 00:06:04.527 sys 0m0.165s 00:06:04.527 11:49:54 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.527 11:49:54 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:04.527 ************************************ 00:06:04.527 END TEST rpc_client 00:06:04.527 ************************************ 00:06:04.527 11:49:54 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:04.527 11:49:54 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.527 11:49:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.527 11:49:54 -- common/autotest_common.sh@10 -- # set +x 00:06:04.527 ************************************ 00:06:04.527 START TEST json_config 00:06:04.527 ************************************ 00:06:04.527 11:49:54 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:04.790 11:49:54 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:04.790 11:49:54 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:04.790 11:49:54 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:04.790 11:49:54 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:04.790 11:49:54 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.790 11:49:54 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.790 11:49:54 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.790 11:49:54 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.790 11:49:54 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.790 11:49:54 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.790 11:49:54 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.790 11:49:54 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.790 11:49:54 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.790 11:49:54 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.790 11:49:54 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.790 11:49:54 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:04.790 11:49:54 json_config -- scripts/common.sh@345 -- # : 1 00:06:04.790 11:49:54 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.790 11:49:54 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.790 11:49:54 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:04.790 11:49:54 json_config -- scripts/common.sh@353 -- # local d=1 00:06:04.790 11:49:54 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.790 11:49:54 json_config -- scripts/common.sh@355 -- # echo 1 00:06:04.791 11:49:54 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.791 11:49:54 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:04.791 11:49:54 json_config -- scripts/common.sh@353 -- # local d=2 00:06:04.791 11:49:54 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.791 11:49:54 json_config -- scripts/common.sh@355 -- # echo 2 00:06:04.791 11:49:54 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.791 11:49:54 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.791 11:49:54 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.791 11:49:54 json_config -- scripts/common.sh@368 -- # return 0 00:06:04.791 11:49:54 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.791 11:49:54 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:04.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.791 --rc genhtml_branch_coverage=1 00:06:04.791 --rc genhtml_function_coverage=1 00:06:04.791 --rc genhtml_legend=1 00:06:04.791 --rc geninfo_all_blocks=1 00:06:04.791 --rc geninfo_unexecuted_blocks=1 00:06:04.791 00:06:04.791 ' 00:06:04.791 11:49:54 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:04.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.791 --rc genhtml_branch_coverage=1 00:06:04.791 --rc genhtml_function_coverage=1 00:06:04.791 --rc genhtml_legend=1 00:06:04.791 --rc geninfo_all_blocks=1 00:06:04.791 --rc geninfo_unexecuted_blocks=1 00:06:04.791 00:06:04.791 ' 00:06:04.791 11:49:54 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:04.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.791 --rc genhtml_branch_coverage=1 00:06:04.791 --rc genhtml_function_coverage=1 00:06:04.791 --rc genhtml_legend=1 00:06:04.791 --rc geninfo_all_blocks=1 00:06:04.791 --rc geninfo_unexecuted_blocks=1 00:06:04.791 00:06:04.791 ' 00:06:04.791 11:49:54 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:04.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.791 --rc genhtml_branch_coverage=1 00:06:04.791 --rc genhtml_function_coverage=1 00:06:04.791 --rc genhtml_legend=1 00:06:04.791 --rc geninfo_all_blocks=1 00:06:04.791 --rc geninfo_unexecuted_blocks=1 00:06:04.791 00:06:04.791 ' 00:06:04.791 11:49:54 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:04.791 11:49:54 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:76e2bd17-ea88-44b7-9470-3bc748526b9d 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=76e2bd17-ea88-44b7-9470-3bc748526b9d 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:04.791 11:49:54 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:04.791 11:49:54 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:04.791 11:49:54 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:04.791 11:49:54 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:04.791 11:49:54 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.791 11:49:54 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.791 11:49:54 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.791 11:49:54 json_config -- paths/export.sh@5 -- # export PATH 00:06:04.791 11:49:54 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@51 -- # : 0 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:04.791 11:49:54 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:04.791 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:04.791 11:49:54 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:04.791 11:49:54 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:04.791 11:49:54 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:04.791 11:49:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:04.791 11:49:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:04.791 11:49:54 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:04.791 11:49:54 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:04.791 WARNING: No tests are enabled so not running JSON configuration tests 00:06:04.792 11:49:54 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:04.792 00:06:04.792 real 0m0.239s 00:06:04.792 user 0m0.134s 00:06:04.792 sys 0m0.103s 00:06:04.792 11:49:54 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.792 11:49:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.792 ************************************ 00:06:04.792 END TEST json_config 00:06:04.792 ************************************ 00:06:04.792 11:49:54 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:04.792 11:49:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.792 11:49:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.792 11:49:54 -- common/autotest_common.sh@10 -- # set +x 00:06:04.792 ************************************ 00:06:04.792 START TEST json_config_extra_key 00:06:04.792 ************************************ 00:06:04.792 11:49:54 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:05.052 11:49:54 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:05.052 11:49:54 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:05.052 11:49:54 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:05.052 11:49:54 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.052 11:49:54 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.052 11:49:54 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:05.052 11:49:54 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.052 11:49:54 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:05.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.052 --rc genhtml_branch_coverage=1 00:06:05.052 --rc genhtml_function_coverage=1 00:06:05.052 --rc genhtml_legend=1 00:06:05.052 --rc geninfo_all_blocks=1 00:06:05.052 --rc geninfo_unexecuted_blocks=1 00:06:05.052 00:06:05.052 ' 00:06:05.052 11:49:54 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:05.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.052 --rc genhtml_branch_coverage=1 00:06:05.052 --rc genhtml_function_coverage=1 00:06:05.052 --rc genhtml_legend=1 00:06:05.052 --rc geninfo_all_blocks=1 00:06:05.052 --rc geninfo_unexecuted_blocks=1 00:06:05.052 00:06:05.052 ' 00:06:05.052 11:49:54 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:05.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.052 --rc genhtml_branch_coverage=1 00:06:05.052 --rc genhtml_function_coverage=1 00:06:05.052 --rc genhtml_legend=1 00:06:05.052 --rc geninfo_all_blocks=1 00:06:05.052 --rc geninfo_unexecuted_blocks=1 00:06:05.052 00:06:05.052 ' 00:06:05.052 11:49:54 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:05.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.052 --rc genhtml_branch_coverage=1 00:06:05.052 --rc 
genhtml_function_coverage=1 00:06:05.053 --rc genhtml_legend=1 00:06:05.053 --rc geninfo_all_blocks=1 00:06:05.053 --rc geninfo_unexecuted_blocks=1 00:06:05.053 00:06:05.053 ' 00:06:05.053 11:49:54 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:76e2bd17-ea88-44b7-9470-3bc748526b9d 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=76e2bd17-ea88-44b7-9470-3bc748526b9d 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:05.053 11:49:54 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:05.053 11:49:54 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.053 11:49:54 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.053 11:49:54 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.053 11:49:54 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.053 11:49:54 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.053 11:49:54 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.053 11:49:54 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:05.053 11:49:54 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:05.053 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:05.053 11:49:54 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:05.053 11:49:54 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:05.053 11:49:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:05.053 11:49:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:05.053 11:49:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:05.053 11:49:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:05.053 11:49:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:05.053 11:49:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:05.053 11:49:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:05.053 11:49:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:05.053 11:49:54 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:05.053 11:49:54 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:05.053 INFO: launching applications... 
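The cmp_versions walk traced above is the per-suite lcov gate: both version strings are split on '.', '-', and ':' and compared field by field, and "1.15 sorts before 2" selects the 1.x-style --rc coverage flags. A condensed sketch of that comparison, not the verbatim scripts/common.sh source (the field padding and the missing-digit handling here are approximations):

    lt() {                                      # returns 0 when $1 sorts before $2
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                                # equal is not less-than
    }
    lt 1.15 2 && echo 'lcov predates 2.x, keep the --rc lcov_*_coverage=1 options'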
00:06:05.053 11:49:54 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:05.053 11:49:54 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:05.053 11:49:54 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:05.053 11:49:54 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:05.053 11:49:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:05.053 11:49:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:05.053 11:49:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:05.053 11:49:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:05.053 11:49:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58479 00:06:05.053 11:49:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:05.053 Waiting for target to run... 00:06:05.053 11:49:55 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:05.053 11:49:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58479 /var/tmp/spdk_tgt.sock 00:06:05.053 11:49:55 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58479 ']' 00:06:05.053 11:49:55 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:05.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:05.053 11:49:55 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.053 11:49:55 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:05.053 11:49:55 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.053 11:49:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:05.313 [2024-11-27 11:49:55.111789] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:05.313 [2024-11-27 11:49:55.111919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58479 ] 00:06:05.573 [2024-11-27 11:49:55.519958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.832 [2024-11-27 11:49:55.627999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.401 00:06:06.401 INFO: shutting down applications... 00:06:06.401 11:49:56 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.401 11:49:56 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:06.401 11:49:56 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:06.401 11:49:56 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
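Between the launch traced above and the shutdown that follows, waitforlisten is what turns "process forked" into "target is serving RPCs". A simplified sketch, assuming the max_retries=100 budget from the trace (the real autotest_common.sh helper probes the socket with an actual RPC rather than the bare existence test used here):

    waitforlisten() {
        local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do            # max_retries=100, as traced
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [[ -S "$rpc_sock" ]] && return 0         # socket present: assume it is listening
            sleep 0.1
        done
        return 1                                     # never came up within the budget
    }

This test passes /var/tmp/spdk_tgt.sock explicitly, so the default socket path above is only a fallback.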
00:06:06.401 11:49:56 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:06.401 11:49:56 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:06.401 11:49:56 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:06.401 11:49:56 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58479 ]] 00:06:06.401 11:49:56 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58479 00:06:06.401 11:49:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:06.401 11:49:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.401 11:49:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58479 00:06:06.402 11:49:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:06.970 11:49:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:06.970 11:49:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.970 11:49:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58479 00:06:06.970 11:49:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:07.539 11:49:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:07.539 11:49:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:07.539 11:49:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58479 00:06:07.539 11:49:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:08.107 11:49:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:08.107 11:49:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.107 11:49:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58479 00:06:08.107 11:49:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:08.367 11:49:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:08.367 11:49:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.367 11:49:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58479 00:06:08.367 11:49:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:08.936 11:49:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:08.936 11:49:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.936 11:49:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58479 00:06:08.936 11:49:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:09.506 11:49:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:09.506 11:49:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.506 11:49:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58479 00:06:09.506 11:49:59 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:09.506 11:49:59 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:09.506 11:49:59 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:09.506 SPDK target shutdown done 00:06:09.506 11:49:59 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:09.506 Success 00:06:09.506 11:49:59 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:09.506 00:06:09.506 real 0m4.617s 00:06:09.506 user 0m3.931s 00:06:09.506 sys 0m0.621s 00:06:09.506 
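The teardown just traced reduces to one SIGINT plus null-signal polling; the 30-iteration, 0.5 s cadence comes straight from json_config/common.sh's loop counters above. As a standalone sketch, with $app_pid standing in for the pid recorded at launch (58479 in this run):

    kill -SIGINT "$app_pid"                       # ask the target to exit cleanly
    for (( i = 0; i < 30; i++ )); do              # at most 30 * 0.5 s = 15 s
        kill -0 "$app_pid" 2>/dev/null || break   # signal 0 only tests existence
        sleep 0.5
    done
    echo 'SPDK target shutdown done'

Each sleep 0.5 in the trace is one round of that poll; this run needed a handful of rounds (00:06:06.970 through 00:06:08.936) before pid 58479 went away.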
11:49:59 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.506 11:49:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:09.506 ************************************ 00:06:09.506 END TEST json_config_extra_key 00:06:09.506 ************************************ 00:06:09.506 11:49:59 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:09.506 11:49:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.506 11:49:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.506 11:49:59 -- common/autotest_common.sh@10 -- # set +x 00:06:09.506 ************************************ 00:06:09.506 START TEST alias_rpc 00:06:09.506 ************************************ 00:06:09.506 11:49:59 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:09.767 * Looking for test storage... 00:06:09.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:09.767 11:49:59 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:09.767 11:49:59 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:09.767 11:49:59 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.767 11:49:59 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.767 11:49:59 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:09.767 11:49:59 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.767 11:49:59 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.767 --rc genhtml_branch_coverage=1 00:06:09.767 --rc genhtml_function_coverage=1 00:06:09.767 --rc genhtml_legend=1 00:06:09.767 --rc geninfo_all_blocks=1 00:06:09.767 --rc geninfo_unexecuted_blocks=1 00:06:09.767 00:06:09.767 ' 00:06:09.767 11:49:59 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.767 --rc genhtml_branch_coverage=1 00:06:09.767 --rc genhtml_function_coverage=1 00:06:09.767 --rc genhtml_legend=1 00:06:09.767 --rc geninfo_all_blocks=1 00:06:09.767 --rc geninfo_unexecuted_blocks=1 00:06:09.767 00:06:09.767 ' 00:06:09.767 11:49:59 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.767 --rc genhtml_branch_coverage=1 00:06:09.767 --rc genhtml_function_coverage=1 00:06:09.767 --rc genhtml_legend=1 00:06:09.767 --rc geninfo_all_blocks=1 00:06:09.767 --rc geninfo_unexecuted_blocks=1 00:06:09.767 00:06:09.767 ' 00:06:09.767 11:49:59 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.767 --rc genhtml_branch_coverage=1 00:06:09.767 --rc genhtml_function_coverage=1 00:06:09.767 --rc genhtml_legend=1 00:06:09.767 --rc geninfo_all_blocks=1 00:06:09.767 --rc geninfo_unexecuted_blocks=1 00:06:09.767 00:06:09.767 ' 00:06:09.767 11:49:59 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:09.767 11:49:59 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58590 00:06:09.767 11:49:59 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.767 11:49:59 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58590 00:06:09.767 11:49:59 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58590 ']' 00:06:09.767 11:49:59 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.767 11:49:59 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
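alias_rpc itself is short: once the target is up it replays rpc.py load_config -i against the live target and then tears it down with killprocess, which the trace below expands into a process-name check plus kill/wait. A sketch of the teardown path this run takes, assuming $pid holds the traced 58590:

    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for a healthy SPDK target
    if [[ "$process_name" != sudo ]]; then            # the comparison the trace performs
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid"                                       # reap it and propagate the exit status

The uname / Linux check seen in the trace simply selects this ps invocation over a FreeBSD variant.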
00:06:09.767 11:49:59 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.767 11:49:59 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.767 11:49:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.767 [2024-11-27 11:49:59.787746] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:09.767 [2024-11-27 11:49:59.787868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58590 ] 00:06:10.027 [2024-11-27 11:49:59.967489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.287 [2024-11-27 11:50:00.083277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.226 11:50:00 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.226 11:50:00 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:11.226 11:50:00 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:11.226 11:50:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58590 00:06:11.226 11:50:01 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58590 ']' 00:06:11.226 11:50:01 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58590 00:06:11.226 11:50:01 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:11.226 11:50:01 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.226 11:50:01 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58590 00:06:11.226 11:50:01 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.226 killing process with pid 58590 00:06:11.226 11:50:01 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.226 11:50:01 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58590' 00:06:11.226 11:50:01 alias_rpc -- common/autotest_common.sh@973 -- # kill 58590 00:06:11.226 11:50:01 alias_rpc -- common/autotest_common.sh@978 -- # wait 58590 00:06:13.764 00:06:13.764 real 0m4.084s 00:06:13.764 user 0m4.020s 00:06:13.764 sys 0m0.621s 00:06:13.764 11:50:03 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.764 11:50:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.764 ************************************ 00:06:13.764 END TEST alias_rpc 00:06:13.764 ************************************ 00:06:13.764 11:50:03 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:13.764 11:50:03 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:13.764 11:50:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.764 11:50:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.764 11:50:03 -- common/autotest_common.sh@10 -- # set +x 00:06:13.764 ************************************ 00:06:13.764 START TEST spdkcli_tcp 00:06:13.764 ************************************ 00:06:13.764 11:50:03 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:13.764 * Looking for test storage... 
00:06:13.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:13.764 11:50:03 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:13.764 11:50:03 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:13.764 11:50:03 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:13.764 11:50:03 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:13.764 11:50:03 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.764 11:50:03 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.764 11:50:03 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.764 11:50:03 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.764 11:50:03 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.764 11:50:03 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.764 11:50:03 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.764 11:50:03 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.023 11:50:03 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.024 11:50:03 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.024 11:50:03 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.024 11:50:03 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:14.024 11:50:03 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:14.024 11:50:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.024 11:50:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.024 11:50:03 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:14.024 11:50:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:14.024 11:50:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.024 11:50:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:14.024 11:50:03 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.024 11:50:03 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:14.024 11:50:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:14.024 11:50:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.024 11:50:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:14.024 11:50:03 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.024 11:50:03 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.024 11:50:03 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.024 11:50:03 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:14.024 11:50:03 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.024 11:50:03 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:14.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.024 --rc genhtml_branch_coverage=1 00:06:14.024 --rc genhtml_function_coverage=1 00:06:14.024 --rc genhtml_legend=1 00:06:14.024 --rc geninfo_all_blocks=1 00:06:14.024 --rc geninfo_unexecuted_blocks=1 00:06:14.024 00:06:14.024 ' 00:06:14.024 11:50:03 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:14.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.024 --rc genhtml_branch_coverage=1 00:06:14.024 --rc genhtml_function_coverage=1 00:06:14.024 --rc genhtml_legend=1 00:06:14.024 --rc geninfo_all_blocks=1 00:06:14.024 --rc geninfo_unexecuted_blocks=1 00:06:14.024 
00:06:14.024 ' 00:06:14.024 11:50:03 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:14.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.024 --rc genhtml_branch_coverage=1 00:06:14.024 --rc genhtml_function_coverage=1 00:06:14.024 --rc genhtml_legend=1 00:06:14.024 --rc geninfo_all_blocks=1 00:06:14.024 --rc geninfo_unexecuted_blocks=1 00:06:14.024 00:06:14.024 ' 00:06:14.024 11:50:03 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:14.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.024 --rc genhtml_branch_coverage=1 00:06:14.024 --rc genhtml_function_coverage=1 00:06:14.024 --rc genhtml_legend=1 00:06:14.024 --rc geninfo_all_blocks=1 00:06:14.024 --rc geninfo_unexecuted_blocks=1 00:06:14.024 00:06:14.024 ' 00:06:14.024 11:50:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:14.024 11:50:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:14.024 11:50:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:14.024 11:50:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:14.024 11:50:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:14.024 11:50:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:14.024 11:50:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:14.024 11:50:03 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:14.024 11:50:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.024 11:50:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58698 00:06:14.024 11:50:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:14.024 11:50:03 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58698 00:06:14.024 11:50:03 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58698 ']' 00:06:14.024 11:50:03 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.024 11:50:03 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.024 11:50:03 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.024 11:50:03 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.024 11:50:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.024 [2024-11-27 11:50:03.945573] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
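The substance of this test is RPC-over-TCP: as the trace below shows, a socat process bridges TCP port 9998 to the target's UNIX-domain RPC socket, and rpc.py then talks to 127.0.0.1:9998 instead of the socket. The moving parts in isolation, with the address, port, and retry/timeout flags taken from this run (scripts/ abbreviates the repo-absolute path in the trace):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &             # TCP front end for the RPC socket
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods     # query the target through the bridge

The long JSON array that follows is rpc_get_methods' answer, every RPC the freshly started target knows about, which doubles as a smoke test that the bridge works.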
00:06:14.024 [2024-11-27 11:50:03.945699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58698 ] 00:06:14.283 [2024-11-27 11:50:04.128612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.283 [2024-11-27 11:50:04.245733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.283 [2024-11-27 11:50:04.245781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.220 11:50:05 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.220 11:50:05 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:15.220 11:50:05 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58715 00:06:15.220 11:50:05 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:15.220 11:50:05 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:15.480 [ 00:06:15.480 "bdev_malloc_delete", 00:06:15.480 "bdev_malloc_create", 00:06:15.480 "bdev_null_resize", 00:06:15.480 "bdev_null_delete", 00:06:15.480 "bdev_null_create", 00:06:15.480 "bdev_nvme_cuse_unregister", 00:06:15.480 "bdev_nvme_cuse_register", 00:06:15.480 "bdev_opal_new_user", 00:06:15.480 "bdev_opal_set_lock_state", 00:06:15.480 "bdev_opal_delete", 00:06:15.480 "bdev_opal_get_info", 00:06:15.480 "bdev_opal_create", 00:06:15.480 "bdev_nvme_opal_revert", 00:06:15.480 "bdev_nvme_opal_init", 00:06:15.480 "bdev_nvme_send_cmd", 00:06:15.480 "bdev_nvme_set_keys", 00:06:15.480 "bdev_nvme_get_path_iostat", 00:06:15.480 "bdev_nvme_get_mdns_discovery_info", 00:06:15.480 "bdev_nvme_stop_mdns_discovery", 00:06:15.480 "bdev_nvme_start_mdns_discovery", 00:06:15.480 "bdev_nvme_set_multipath_policy", 00:06:15.480 "bdev_nvme_set_preferred_path", 00:06:15.480 "bdev_nvme_get_io_paths", 00:06:15.480 "bdev_nvme_remove_error_injection", 00:06:15.480 "bdev_nvme_add_error_injection", 00:06:15.480 "bdev_nvme_get_discovery_info", 00:06:15.480 "bdev_nvme_stop_discovery", 00:06:15.480 "bdev_nvme_start_discovery", 00:06:15.480 "bdev_nvme_get_controller_health_info", 00:06:15.480 "bdev_nvme_disable_controller", 00:06:15.480 "bdev_nvme_enable_controller", 00:06:15.480 "bdev_nvme_reset_controller", 00:06:15.480 "bdev_nvme_get_transport_statistics", 00:06:15.480 "bdev_nvme_apply_firmware", 00:06:15.480 "bdev_nvme_detach_controller", 00:06:15.480 "bdev_nvme_get_controllers", 00:06:15.480 "bdev_nvme_attach_controller", 00:06:15.480 "bdev_nvme_set_hotplug", 00:06:15.480 "bdev_nvme_set_options", 00:06:15.480 "bdev_passthru_delete", 00:06:15.480 "bdev_passthru_create", 00:06:15.480 "bdev_lvol_set_parent_bdev", 00:06:15.480 "bdev_lvol_set_parent", 00:06:15.480 "bdev_lvol_check_shallow_copy", 00:06:15.480 "bdev_lvol_start_shallow_copy", 00:06:15.480 "bdev_lvol_grow_lvstore", 00:06:15.480 "bdev_lvol_get_lvols", 00:06:15.480 "bdev_lvol_get_lvstores", 00:06:15.480 "bdev_lvol_delete", 00:06:15.480 "bdev_lvol_set_read_only", 00:06:15.480 "bdev_lvol_resize", 00:06:15.480 "bdev_lvol_decouple_parent", 00:06:15.480 "bdev_lvol_inflate", 00:06:15.480 "bdev_lvol_rename", 00:06:15.480 "bdev_lvol_clone_bdev", 00:06:15.480 "bdev_lvol_clone", 00:06:15.480 "bdev_lvol_snapshot", 00:06:15.480 "bdev_lvol_create", 00:06:15.480 "bdev_lvol_delete_lvstore", 00:06:15.480 "bdev_lvol_rename_lvstore", 00:06:15.480 
"bdev_lvol_create_lvstore", 00:06:15.480 "bdev_raid_set_options", 00:06:15.480 "bdev_raid_remove_base_bdev", 00:06:15.480 "bdev_raid_add_base_bdev", 00:06:15.480 "bdev_raid_delete", 00:06:15.480 "bdev_raid_create", 00:06:15.480 "bdev_raid_get_bdevs", 00:06:15.480 "bdev_error_inject_error", 00:06:15.480 "bdev_error_delete", 00:06:15.480 "bdev_error_create", 00:06:15.480 "bdev_split_delete", 00:06:15.480 "bdev_split_create", 00:06:15.480 "bdev_delay_delete", 00:06:15.480 "bdev_delay_create", 00:06:15.480 "bdev_delay_update_latency", 00:06:15.480 "bdev_zone_block_delete", 00:06:15.480 "bdev_zone_block_create", 00:06:15.480 "blobfs_create", 00:06:15.480 "blobfs_detect", 00:06:15.480 "blobfs_set_cache_size", 00:06:15.480 "bdev_xnvme_delete", 00:06:15.480 "bdev_xnvme_create", 00:06:15.480 "bdev_aio_delete", 00:06:15.480 "bdev_aio_rescan", 00:06:15.480 "bdev_aio_create", 00:06:15.480 "bdev_ftl_set_property", 00:06:15.480 "bdev_ftl_get_properties", 00:06:15.480 "bdev_ftl_get_stats", 00:06:15.480 "bdev_ftl_unmap", 00:06:15.480 "bdev_ftl_unload", 00:06:15.480 "bdev_ftl_delete", 00:06:15.480 "bdev_ftl_load", 00:06:15.480 "bdev_ftl_create", 00:06:15.480 "bdev_virtio_attach_controller", 00:06:15.480 "bdev_virtio_scsi_get_devices", 00:06:15.480 "bdev_virtio_detach_controller", 00:06:15.480 "bdev_virtio_blk_set_hotplug", 00:06:15.480 "bdev_iscsi_delete", 00:06:15.480 "bdev_iscsi_create", 00:06:15.480 "bdev_iscsi_set_options", 00:06:15.480 "accel_error_inject_error", 00:06:15.480 "ioat_scan_accel_module", 00:06:15.480 "dsa_scan_accel_module", 00:06:15.480 "iaa_scan_accel_module", 00:06:15.480 "keyring_file_remove_key", 00:06:15.480 "keyring_file_add_key", 00:06:15.480 "keyring_linux_set_options", 00:06:15.480 "fsdev_aio_delete", 00:06:15.480 "fsdev_aio_create", 00:06:15.480 "iscsi_get_histogram", 00:06:15.480 "iscsi_enable_histogram", 00:06:15.480 "iscsi_set_options", 00:06:15.480 "iscsi_get_auth_groups", 00:06:15.480 "iscsi_auth_group_remove_secret", 00:06:15.480 "iscsi_auth_group_add_secret", 00:06:15.480 "iscsi_delete_auth_group", 00:06:15.480 "iscsi_create_auth_group", 00:06:15.480 "iscsi_set_discovery_auth", 00:06:15.480 "iscsi_get_options", 00:06:15.480 "iscsi_target_node_request_logout", 00:06:15.480 "iscsi_target_node_set_redirect", 00:06:15.480 "iscsi_target_node_set_auth", 00:06:15.480 "iscsi_target_node_add_lun", 00:06:15.480 "iscsi_get_stats", 00:06:15.480 "iscsi_get_connections", 00:06:15.480 "iscsi_portal_group_set_auth", 00:06:15.480 "iscsi_start_portal_group", 00:06:15.480 "iscsi_delete_portal_group", 00:06:15.480 "iscsi_create_portal_group", 00:06:15.480 "iscsi_get_portal_groups", 00:06:15.480 "iscsi_delete_target_node", 00:06:15.480 "iscsi_target_node_remove_pg_ig_maps", 00:06:15.480 "iscsi_target_node_add_pg_ig_maps", 00:06:15.480 "iscsi_create_target_node", 00:06:15.480 "iscsi_get_target_nodes", 00:06:15.480 "iscsi_delete_initiator_group", 00:06:15.480 "iscsi_initiator_group_remove_initiators", 00:06:15.480 "iscsi_initiator_group_add_initiators", 00:06:15.480 "iscsi_create_initiator_group", 00:06:15.480 "iscsi_get_initiator_groups", 00:06:15.480 "nvmf_set_crdt", 00:06:15.480 "nvmf_set_config", 00:06:15.480 "nvmf_set_max_subsystems", 00:06:15.480 "nvmf_stop_mdns_prr", 00:06:15.480 "nvmf_publish_mdns_prr", 00:06:15.480 "nvmf_subsystem_get_listeners", 00:06:15.480 "nvmf_subsystem_get_qpairs", 00:06:15.480 "nvmf_subsystem_get_controllers", 00:06:15.480 "nvmf_get_stats", 00:06:15.480 "nvmf_get_transports", 00:06:15.480 "nvmf_create_transport", 00:06:15.480 "nvmf_get_targets", 00:06:15.480 
"nvmf_delete_target", 00:06:15.480 "nvmf_create_target", 00:06:15.480 "nvmf_subsystem_allow_any_host", 00:06:15.480 "nvmf_subsystem_set_keys", 00:06:15.480 "nvmf_subsystem_remove_host", 00:06:15.480 "nvmf_subsystem_add_host", 00:06:15.480 "nvmf_ns_remove_host", 00:06:15.480 "nvmf_ns_add_host", 00:06:15.480 "nvmf_subsystem_remove_ns", 00:06:15.480 "nvmf_subsystem_set_ns_ana_group", 00:06:15.480 "nvmf_subsystem_add_ns", 00:06:15.480 "nvmf_subsystem_listener_set_ana_state", 00:06:15.480 "nvmf_discovery_get_referrals", 00:06:15.480 "nvmf_discovery_remove_referral", 00:06:15.480 "nvmf_discovery_add_referral", 00:06:15.480 "nvmf_subsystem_remove_listener", 00:06:15.480 "nvmf_subsystem_add_listener", 00:06:15.480 "nvmf_delete_subsystem", 00:06:15.480 "nvmf_create_subsystem", 00:06:15.480 "nvmf_get_subsystems", 00:06:15.480 "env_dpdk_get_mem_stats", 00:06:15.480 "nbd_get_disks", 00:06:15.480 "nbd_stop_disk", 00:06:15.480 "nbd_start_disk", 00:06:15.480 "ublk_recover_disk", 00:06:15.480 "ublk_get_disks", 00:06:15.480 "ublk_stop_disk", 00:06:15.480 "ublk_start_disk", 00:06:15.480 "ublk_destroy_target", 00:06:15.480 "ublk_create_target", 00:06:15.480 "virtio_blk_create_transport", 00:06:15.481 "virtio_blk_get_transports", 00:06:15.481 "vhost_controller_set_coalescing", 00:06:15.481 "vhost_get_controllers", 00:06:15.481 "vhost_delete_controller", 00:06:15.481 "vhost_create_blk_controller", 00:06:15.481 "vhost_scsi_controller_remove_target", 00:06:15.481 "vhost_scsi_controller_add_target", 00:06:15.481 "vhost_start_scsi_controller", 00:06:15.481 "vhost_create_scsi_controller", 00:06:15.481 "thread_set_cpumask", 00:06:15.481 "scheduler_set_options", 00:06:15.481 "framework_get_governor", 00:06:15.481 "framework_get_scheduler", 00:06:15.481 "framework_set_scheduler", 00:06:15.481 "framework_get_reactors", 00:06:15.481 "thread_get_io_channels", 00:06:15.481 "thread_get_pollers", 00:06:15.481 "thread_get_stats", 00:06:15.481 "framework_monitor_context_switch", 00:06:15.481 "spdk_kill_instance", 00:06:15.481 "log_enable_timestamps", 00:06:15.481 "log_get_flags", 00:06:15.481 "log_clear_flag", 00:06:15.481 "log_set_flag", 00:06:15.481 "log_get_level", 00:06:15.481 "log_set_level", 00:06:15.481 "log_get_print_level", 00:06:15.481 "log_set_print_level", 00:06:15.481 "framework_enable_cpumask_locks", 00:06:15.481 "framework_disable_cpumask_locks", 00:06:15.481 "framework_wait_init", 00:06:15.481 "framework_start_init", 00:06:15.481 "scsi_get_devices", 00:06:15.481 "bdev_get_histogram", 00:06:15.481 "bdev_enable_histogram", 00:06:15.481 "bdev_set_qos_limit", 00:06:15.481 "bdev_set_qd_sampling_period", 00:06:15.481 "bdev_get_bdevs", 00:06:15.481 "bdev_reset_iostat", 00:06:15.481 "bdev_get_iostat", 00:06:15.481 "bdev_examine", 00:06:15.481 "bdev_wait_for_examine", 00:06:15.481 "bdev_set_options", 00:06:15.481 "accel_get_stats", 00:06:15.481 "accel_set_options", 00:06:15.481 "accel_set_driver", 00:06:15.481 "accel_crypto_key_destroy", 00:06:15.481 "accel_crypto_keys_get", 00:06:15.481 "accel_crypto_key_create", 00:06:15.481 "accel_assign_opc", 00:06:15.481 "accel_get_module_info", 00:06:15.481 "accel_get_opc_assignments", 00:06:15.481 "vmd_rescan", 00:06:15.481 "vmd_remove_device", 00:06:15.481 "vmd_enable", 00:06:15.481 "sock_get_default_impl", 00:06:15.481 "sock_set_default_impl", 00:06:15.481 "sock_impl_set_options", 00:06:15.481 "sock_impl_get_options", 00:06:15.481 "iobuf_get_stats", 00:06:15.481 "iobuf_set_options", 00:06:15.481 "keyring_get_keys", 00:06:15.481 "framework_get_pci_devices", 00:06:15.481 
"framework_get_config", 00:06:15.481 "framework_get_subsystems", 00:06:15.481 "fsdev_set_opts", 00:06:15.481 "fsdev_get_opts", 00:06:15.481 "trace_get_info", 00:06:15.481 "trace_get_tpoint_group_mask", 00:06:15.481 "trace_disable_tpoint_group", 00:06:15.481 "trace_enable_tpoint_group", 00:06:15.481 "trace_clear_tpoint_mask", 00:06:15.481 "trace_set_tpoint_mask", 00:06:15.481 "notify_get_notifications", 00:06:15.481 "notify_get_types", 00:06:15.481 "spdk_get_version", 00:06:15.481 "rpc_get_methods" 00:06:15.481 ] 00:06:15.481 11:50:05 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:15.481 11:50:05 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:15.481 11:50:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.481 11:50:05 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:15.481 11:50:05 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58698 00:06:15.481 11:50:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58698 ']' 00:06:15.481 11:50:05 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58698 00:06:15.481 11:50:05 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:15.481 11:50:05 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.481 11:50:05 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58698 00:06:15.481 11:50:05 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.481 11:50:05 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.481 killing process with pid 58698 00:06:15.481 11:50:05 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58698' 00:06:15.481 11:50:05 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58698 00:06:15.481 11:50:05 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58698 00:06:18.018 00:06:18.018 real 0m4.194s 00:06:18.018 user 0m7.409s 00:06:18.018 sys 0m0.660s 00:06:18.018 11:50:07 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.018 11:50:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.018 ************************************ 00:06:18.018 END TEST spdkcli_tcp 00:06:18.018 ************************************ 00:06:18.018 11:50:07 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:18.018 11:50:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.018 11:50:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.018 11:50:07 -- common/autotest_common.sh@10 -- # set +x 00:06:18.018 ************************************ 00:06:18.018 START TEST dpdk_mem_utility 00:06:18.018 ************************************ 00:06:18.018 11:50:07 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:18.018 * Looking for test storage... 
00:06:18.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:18.018 11:50:07 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.018 11:50:07 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.018 11:50:07 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.018 11:50:08 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.018 11:50:08 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.018 11:50:08 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.018 11:50:08 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.018 11:50:08 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.018 11:50:08 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.018 11:50:08 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.018 11:50:08 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.018 11:50:08 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.018 11:50:08 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.018 11:50:08 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.018 11:50:08 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.018 11:50:08 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:18.018 11:50:08 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:18.278 11:50:08 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.278 11:50:08 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.278 11:50:08 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:18.278 11:50:08 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:18.278 11:50:08 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.278 11:50:08 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:18.278 11:50:08 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.278 11:50:08 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:18.278 11:50:08 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:18.278 11:50:08 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.278 11:50:08 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:18.278 11:50:08 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.278 11:50:08 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.278 11:50:08 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.278 11:50:08 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:18.278 11:50:08 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.278 11:50:08 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.278 --rc genhtml_branch_coverage=1 00:06:18.278 --rc genhtml_function_coverage=1 00:06:18.278 --rc genhtml_legend=1 00:06:18.278 --rc geninfo_all_blocks=1 00:06:18.278 --rc geninfo_unexecuted_blocks=1 00:06:18.278 00:06:18.278 ' 00:06:18.278 11:50:08 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.278 --rc 
genhtml_branch_coverage=1 00:06:18.278 --rc genhtml_function_coverage=1 00:06:18.278 --rc genhtml_legend=1 00:06:18.278 --rc geninfo_all_blocks=1 00:06:18.278 --rc geninfo_unexecuted_blocks=1 00:06:18.278 00:06:18.278 ' 00:06:18.278 11:50:08 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.278 --rc genhtml_branch_coverage=1 00:06:18.278 --rc genhtml_function_coverage=1 00:06:18.278 --rc genhtml_legend=1 00:06:18.278 --rc geninfo_all_blocks=1 00:06:18.278 --rc geninfo_unexecuted_blocks=1 00:06:18.278 00:06:18.278 ' 00:06:18.278 11:50:08 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.278 --rc genhtml_branch_coverage=1 00:06:18.278 --rc genhtml_function_coverage=1 00:06:18.278 --rc genhtml_legend=1 00:06:18.278 --rc geninfo_all_blocks=1 00:06:18.278 --rc geninfo_unexecuted_blocks=1 00:06:18.278 00:06:18.278 ' 00:06:18.278 11:50:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:18.278 11:50:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58820 00:06:18.278 11:50:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:18.278 11:50:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58820 00:06:18.278 11:50:08 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58820 ']' 00:06:18.278 11:50:08 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.278 11:50:08 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.278 11:50:08 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.278 11:50:08 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.278 11:50:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:18.278 [2024-11-27 11:50:08.191986] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
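Everything below boils down to three commands: one RPC that makes the running target dump its DPDK memory state to a file, then two passes of the helper script over that dump, first a summary and then a detailed map. As a standalone recipe, with scripts/ abbreviating the repo-absolute paths in the trace:

    scripts/rpc.py env_dpdk_get_mem_stats        # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                     # summary: heaps, mempools, memzones
    scripts/dpdk_mem_info.py -m 0                # per-element map (heap 0, per the dump below)

Note that most pool and ring names in the summary embed the target's pid (58820 here), presumably so that concurrent targets do not collide in shared memory.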
00:06:18.278 [2024-11-27 11:50:08.192120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58820 ] 00:06:18.537 [2024-11-27 11:50:08.374416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.537 [2024-11-27 11:50:08.489741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.477 11:50:09 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.477 11:50:09 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:19.477 11:50:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:19.477 11:50:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:19.477 11:50:09 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.477 11:50:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:19.477 { 00:06:19.477 "filename": "/tmp/spdk_mem_dump.txt" 00:06:19.477 } 00:06:19.477 11:50:09 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.477 11:50:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:19.477 DPDK memory size 824.000000 MiB in 1 heap(s) 00:06:19.477 1 heaps totaling size 824.000000 MiB 00:06:19.477 size: 824.000000 MiB heap id: 0 00:06:19.477 end heaps---------- 00:06:19.477 9 mempools totaling size 603.782043 MiB 00:06:19.477 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:19.477 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:19.477 size: 100.555481 MiB name: bdev_io_58820 00:06:19.477 size: 50.003479 MiB name: msgpool_58820 00:06:19.477 size: 36.509338 MiB name: fsdev_io_58820 00:06:19.477 size: 21.763794 MiB name: PDU_Pool 00:06:19.477 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:19.477 size: 4.133484 MiB name: evtpool_58820 00:06:19.477 size: 0.026123 MiB name: Session_Pool 00:06:19.477 end mempools------- 00:06:19.477 6 memzones totaling size 4.142822 MiB 00:06:19.477 size: 1.000366 MiB name: RG_ring_0_58820 00:06:19.477 size: 1.000366 MiB name: RG_ring_1_58820 00:06:19.477 size: 1.000366 MiB name: RG_ring_4_58820 00:06:19.477 size: 1.000366 MiB name: RG_ring_5_58820 00:06:19.477 size: 0.125366 MiB name: RG_ring_2_58820 00:06:19.477 size: 0.015991 MiB name: RG_ring_3_58820 00:06:19.477 end memzones------- 00:06:19.477 11:50:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:19.477 heap id: 0 total size: 824.000000 MiB number of busy elements: 315 number of free elements: 18 00:06:19.477 list of free elements. 
size: 16.781372 MiB
00:06:19.477 element at address: 0x200006400000 with size: 1.995972 MiB
00:06:19.477 element at address: 0x20000a600000 with size: 1.995972 MiB
00:06:19.477 element at address: 0x200003e00000 with size: 1.991028 MiB
00:06:19.477 element at address: 0x200019500040 with size: 0.999939 MiB
00:06:19.477 element at address: 0x200019900040 with size: 0.999939 MiB
00:06:19.477 element at address: 0x200019a00000 with size: 0.999084 MiB
00:06:19.477 element at address: 0x200032600000 with size: 0.994324 MiB
00:06:19.477 element at address: 0x200000400000 with size: 0.992004 MiB
00:06:19.477 element at address: 0x200019200000 with size: 0.959656 MiB
00:06:19.477 element at address: 0x200019d00040 with size: 0.936401 MiB
00:06:19.477 element at address: 0x200000200000 with size: 0.716980 MiB
00:06:19.477 element at address: 0x20001b400000 with size: 0.562683 MiB
00:06:19.477 element at address: 0x200000c00000 with size: 0.489197 MiB
00:06:19.477 element at address: 0x200019600000 with size: 0.487976 MiB
00:06:19.477 element at address: 0x200019e00000 with size: 0.485413 MiB
00:06:19.477 element at address: 0x200012c00000 with size: 0.433472 MiB
00:06:19.477 element at address: 0x200028800000 with size: 0.390442 MiB
00:06:19.477 element at address: 0x200000800000 with size: 0.350891 MiB
00:06:19.477 list of standard malloc elements. size: 199.287720 MiB
00:06:19.477 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:06:19.477 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:06:19.477 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:06:19.477 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:06:19.477 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:06:19.477 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:06:19.477 element at address: 0x200019deff40 with size: 0.062683 MiB
00:06:19.477 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:06:19.477 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:06:19.477 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:06:19.477 element at address: 0x200012bff040 with size: 0.000305 MiB
[... several hundred further free-list entries, every one "with size: 0.000244 MiB", covering addresses 0x2000002d7b00 through 0x20002886fe80; elided here for readability ...]
00:06:19.479 list of memzone associated elements.
size: 607.930908 MiB 00:06:19.479 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:06:19.479 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:19.479 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:06:19.479 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:19.479 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:06:19.479 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58820_0 00:06:19.479 element at address: 0x200000dff340 with size: 48.003113 MiB 00:06:19.479 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58820_0 00:06:19.479 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:06:19.479 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58820_0 00:06:19.479 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:06:19.479 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:19.479 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:06:19.479 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:19.479 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:06:19.479 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58820_0 00:06:19.479 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:06:19.479 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58820 00:06:19.479 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:19.479 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58820 00:06:19.479 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:06:19.479 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:19.479 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:06:19.479 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:19.479 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:19.479 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:19.479 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:06:19.480 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:19.480 element at address: 0x200000cff100 with size: 1.000549 MiB 00:06:19.480 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58820 00:06:19.480 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:06:19.480 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58820 00:06:19.480 element at address: 0x200019affd40 with size: 1.000549 MiB 00:06:19.480 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58820 00:06:19.480 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:06:19.480 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58820 00:06:19.480 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:06:19.480 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58820 00:06:19.480 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:06:19.480 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58820 00:06:19.480 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:06:19.480 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:19.480 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:06:19.480 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:19.480 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:06:19.480 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:19.480 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:06:19.480 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58820 00:06:19.480 element at address: 0x20000085df80 with size: 0.125549 MiB 00:06:19.480 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58820 00:06:19.480 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:06:19.480 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:19.480 element at address: 0x200028864140 with size: 0.023804 MiB 00:06:19.480 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:19.480 element at address: 0x200000859d40 with size: 0.016174 MiB 00:06:19.480 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58820 00:06:19.480 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:06:19.480 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:19.480 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:06:19.480 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58820 00:06:19.480 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:06:19.480 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58820 00:06:19.480 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:06:19.480 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58820 00:06:19.480 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:06:19.480 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:19.480 11:50:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:19.480 11:50:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58820 00:06:19.480 11:50:09 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58820 ']' 00:06:19.480 11:50:09 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58820 00:06:19.480 11:50:09 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:19.480 11:50:09 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.480 11:50:09 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58820 00:06:19.480 11:50:09 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.480 killing process with pid 58820 00:06:19.480 11:50:09 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.480 11:50:09 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58820' 00:06:19.480 11:50:09 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58820 00:06:19.480 11:50:09 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58820 00:06:22.019 00:06:22.019 real 0m3.982s 00:06:22.019 user 0m3.817s 00:06:22.019 sys 0m0.609s 00:06:22.019 11:50:11 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.019 ************************************ 00:06:22.019 END TEST dpdk_mem_utility 00:06:22.019 11:50:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:22.019 ************************************ 00:06:22.019 11:50:11 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:22.019 11:50:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.019 11:50:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.019 11:50:11 -- common/autotest_common.sh@10 -- # set +x 
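The test_dpdk_mem_info.sh run above finishes by dumping the whole DPDK heap: a handful of large elements (one 132 MiB and one 64 MiB element plus several ~1 MiB pools), several hundred tiny free-list entries, and finally the memzone table that ties elements to named pools such as MP_PDU_Pool, MP_bdev_io_58820 and RG_ring_0_58820. The teardown that follows is autotest's killprocess pattern, and every step of it is visible in the @954-@978 trace records. A minimal bash sketch of that flow, reconstructed only from the traced commands (the real helper in common/autotest_common.sh also special-cases sudo-wrapped processes, per the '[' reactor_0 = sudo ']' check):

  killprocess() {
      # Simplified reconstruction of the sequence traced above, not the full helper.
      local pid=$1
      [ -n "$pid" ] || return 1            # the '[' -z 58820 ']' guard
      kill -0 "$pid" || return 1           # signal 0 only probes that the pid exists
      local process_name=unknown
      if [ "$(uname)" = Linux ]; then
          # resolve the command name; here it comes back as reactor_0 (an SPDK reactor)
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                          # reap the child so the test exits cleanly
  }
  killprocess 58820                        # as invoked by test_dpdk_mem_info.sh@26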
00:06:22.019 ************************************ 00:06:22.019 START TEST event 00:06:22.019 ************************************ 00:06:22.019 11:50:11 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:22.019 * Looking for test storage... 00:06:22.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:22.019 11:50:12 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.019 11:50:12 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.019 11:50:12 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.278 11:50:12 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:22.278 11:50:12 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.278 11:50:12 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.278 11:50:12 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.278 11:50:12 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.278 11:50:12 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.278 11:50:12 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.278 11:50:12 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.278 11:50:12 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.278 11:50:12 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.278 11:50:12 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.278 11:50:12 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.278 11:50:12 event -- scripts/common.sh@344 -- # case "$op" in 00:06:22.278 11:50:12 event -- scripts/common.sh@345 -- # : 1 00:06:22.278 11:50:12 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.278 11:50:12 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.278 11:50:12 event -- scripts/common.sh@365 -- # decimal 1 00:06:22.278 11:50:12 event -- scripts/common.sh@353 -- # local d=1 00:06:22.278 11:50:12 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.278 11:50:12 event -- scripts/common.sh@355 -- # echo 1 00:06:22.278 11:50:12 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.278 11:50:12 event -- scripts/common.sh@366 -- # decimal 2 00:06:22.278 11:50:12 event -- scripts/common.sh@353 -- # local d=2 00:06:22.278 11:50:12 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.278 11:50:12 event -- scripts/common.sh@355 -- # echo 2 00:06:22.278 11:50:12 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.278 11:50:12 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.278 11:50:12 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.278 11:50:12 event -- scripts/common.sh@368 -- # return 0 00:06:22.278 11:50:12 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.278 11:50:12 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:22.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.278 --rc genhtml_branch_coverage=1 00:06:22.278 --rc genhtml_function_coverage=1 00:06:22.278 --rc genhtml_legend=1 00:06:22.278 --rc geninfo_all_blocks=1 00:06:22.278 --rc geninfo_unexecuted_blocks=1 00:06:22.278 00:06:22.278 ' 00:06:22.278 11:50:12 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:22.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.278 --rc genhtml_branch_coverage=1 00:06:22.278 --rc genhtml_function_coverage=1 00:06:22.278 --rc genhtml_legend=1 00:06:22.278 --rc 
geninfo_all_blocks=1 00:06:22.278 --rc geninfo_unexecuted_blocks=1 00:06:22.278 00:06:22.278 ' 00:06:22.278 11:50:12 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:22.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.278 --rc genhtml_branch_coverage=1 00:06:22.278 --rc genhtml_function_coverage=1 00:06:22.278 --rc genhtml_legend=1 00:06:22.278 --rc geninfo_all_blocks=1 00:06:22.278 --rc geninfo_unexecuted_blocks=1 00:06:22.278 00:06:22.278 ' 00:06:22.278 11:50:12 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:22.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.278 --rc genhtml_branch_coverage=1 00:06:22.278 --rc genhtml_function_coverage=1 00:06:22.278 --rc genhtml_legend=1 00:06:22.278 --rc geninfo_all_blocks=1 00:06:22.278 --rc geninfo_unexecuted_blocks=1 00:06:22.278 00:06:22.278 ' 00:06:22.278 11:50:12 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:22.278 11:50:12 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:22.278 11:50:12 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:22.278 11:50:12 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:22.278 11:50:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.278 11:50:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.278 ************************************ 00:06:22.278 START TEST event_perf 00:06:22.278 ************************************ 00:06:22.278 11:50:12 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:22.278 Running I/O for 1 seconds...[2024-11-27 11:50:12.204306] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:22.278 [2024-11-27 11:50:12.204541] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58928 ] 00:06:22.538 [2024-11-27 11:50:12.379029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:22.538 [2024-11-27 11:50:12.506322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.538 [2024-11-27 11:50:12.506490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.538 [2024-11-27 11:50:12.506620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.538 Running I/O for 1 seconds...[2024-11-27 11:50:12.506650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.916 00:06:23.916 lcore 0: 212291 00:06:23.916 lcore 1: 212290 00:06:23.916 lcore 2: 212291 00:06:23.916 lcore 3: 212290 00:06:23.916 done. 
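To decode the event_perf invocation above: -m 0xF is a hexadecimal core mask (binary 1111, hence the four "Reactor started on core 0..3" notices) and -t 1 is the run time in seconds; each reactor then reports how many events it pushed through its lcore. A re-run outside the harness would look like this, with the binary path copied from the log and the flag meanings inferred from the traced output:

  # -m 0xF -> core mask 1111b: one SPDK reactor on each of lcores 0,1,2,3
  # -t 1   -> drive the event loop for 1 second, then print per-lcore counts
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
  # Output shape, with the numbers from the run above:
  #   lcore 0: 212291 ... lcore 3: 212290
  # i.e. about 4 x 212k = ~849k events per second across the four cores.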
00:06:23.916 ************************************ 00:06:23.916 END TEST event_perf 00:06:23.916 ************************************ 00:06:23.916 00:06:23.916 real 0m1.599s 00:06:23.916 user 0m4.363s 00:06:23.916 sys 0m0.115s 00:06:23.916 11:50:13 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.916 11:50:13 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:23.916 11:50:13 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:23.916 11:50:13 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:23.916 11:50:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.916 11:50:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.916 ************************************ 00:06:23.916 START TEST event_reactor 00:06:23.916 ************************************ 00:06:23.916 11:50:13 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:23.916 [2024-11-27 11:50:13.865367] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:23.916 [2024-11-27 11:50:13.865486] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58967 ] 00:06:24.175 [2024-11-27 11:50:14.043688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.175 [2024-11-27 11:50:14.162656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.552 test_start 00:06:25.552 oneshot 00:06:25.552 tick 100 00:06:25.552 tick 100 00:06:25.552 tick 250 00:06:25.552 tick 100 00:06:25.552 tick 100 00:06:25.552 tick 100 00:06:25.552 tick 250 00:06:25.552 tick 500 00:06:25.552 tick 100 00:06:25.552 tick 100 00:06:25.552 tick 250 00:06:25.553 tick 100 00:06:25.553 tick 100 00:06:25.553 test_end 00:06:25.553 00:06:25.553 real 0m1.576s 00:06:25.553 user 0m1.350s 00:06:25.553 sys 0m0.117s 00:06:25.553 11:50:15 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.553 ************************************ 00:06:25.553 END TEST event_reactor 00:06:25.553 ************************************ 00:06:25.553 11:50:15 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:25.553 11:50:15 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:25.553 11:50:15 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:25.553 11:50:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.553 11:50:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.553 ************************************ 00:06:25.553 START TEST event_reactor_perf 00:06:25.553 ************************************ 00:06:25.553 11:50:15 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:25.553 [2024-11-27 11:50:15.509240] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:06:25.553 [2024-11-27 11:50:15.509510] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59004 ] 00:06:25.811 [2024-11-27 11:50:15.684612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.811 [2024-11-27 11:50:15.805036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.187 test_start 00:06:27.187 test_end 00:06:27.187 Performance: 382547 events per second 00:06:27.187 00:06:27.187 real 0m1.577s 00:06:27.187 user 0m1.358s 00:06:27.187 sys 0m0.110s 00:06:27.187 11:50:17 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.187 11:50:17 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.187 ************************************ 00:06:27.187 END TEST event_reactor_perf 00:06:27.187 ************************************ 00:06:27.187 11:50:17 event -- event/event.sh@49 -- # uname -s 00:06:27.187 11:50:17 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:27.187 11:50:17 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:27.187 11:50:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.187 11:50:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.187 11:50:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.187 ************************************ 00:06:27.187 START TEST event_scheduler 00:06:27.187 ************************************ 00:06:27.187 11:50:17 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:27.187 * Looking for test storage... 
00:06:27.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:27.187 11:50:17 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:27.187 11:50:17 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:27.187 11:50:17 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:27.446 11:50:17 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:27.446 11:50:17 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.446 11:50:17 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.446 11:50:17 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.446 11:50:17 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.446 11:50:17 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.446 11:50:17 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.446 11:50:17 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.447 11:50:17 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.447 11:50:17 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.447 11:50:17 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.447 11:50:17 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.447 11:50:17 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:27.447 11:50:17 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:27.447 11:50:17 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.447 11:50:17 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.447 11:50:17 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:27.447 11:50:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:27.447 11:50:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.447 11:50:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:27.447 11:50:17 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.447 11:50:17 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:27.447 11:50:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:27.447 11:50:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.447 11:50:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:27.447 11:50:17 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.447 11:50:17 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.447 11:50:17 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.447 11:50:17 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:27.447 11:50:17 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.447 11:50:17 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:27.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.447 --rc genhtml_branch_coverage=1 00:06:27.447 --rc genhtml_function_coverage=1 00:06:27.447 --rc genhtml_legend=1 00:06:27.447 --rc geninfo_all_blocks=1 00:06:27.447 --rc geninfo_unexecuted_blocks=1 00:06:27.447 00:06:27.447 ' 00:06:27.447 11:50:17 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:27.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.447 --rc genhtml_branch_coverage=1 00:06:27.447 --rc genhtml_function_coverage=1 00:06:27.447 --rc genhtml_legend=1 00:06:27.447 --rc geninfo_all_blocks=1 00:06:27.447 --rc geninfo_unexecuted_blocks=1 00:06:27.447 00:06:27.447 ' 00:06:27.447 11:50:17 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:27.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.447 --rc genhtml_branch_coverage=1 00:06:27.447 --rc genhtml_function_coverage=1 00:06:27.447 --rc genhtml_legend=1 00:06:27.447 --rc geninfo_all_blocks=1 00:06:27.447 --rc geninfo_unexecuted_blocks=1 00:06:27.447 00:06:27.447 ' 00:06:27.447 11:50:17 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:27.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.447 --rc genhtml_branch_coverage=1 00:06:27.447 --rc genhtml_function_coverage=1 00:06:27.447 --rc genhtml_legend=1 00:06:27.447 --rc geninfo_all_blocks=1 00:06:27.447 --rc geninfo_unexecuted_blocks=1 00:06:27.447 00:06:27.447 ' 00:06:27.447 11:50:17 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:27.447 11:50:17 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59080 00:06:27.447 11:50:17 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:27.447 11:50:17 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.447 11:50:17 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59080 00:06:27.447 11:50:17 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59080 ']' 00:06:27.447 11:50:17 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.447 11:50:17 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.447 11:50:17 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.447 11:50:17 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.447 11:50:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.447 [2024-11-27 11:50:17.419937] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:27.447 [2024-11-27 11:50:17.420294] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59080 ] 00:06:27.707 [2024-11-27 11:50:17.601899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.707 [2024-11-27 11:50:17.729329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.707 [2024-11-27 11:50:17.729496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.707 [2024-11-27 11:50:17.729631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.707 [2024-11-27 11:50:17.729773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.276 11:50:18 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.276 11:50:18 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:28.276 11:50:18 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:28.276 11:50:18 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.276 11:50:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.276 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:28.276 POWER: Cannot set governor of lcore 0 to userspace 00:06:28.276 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:28.276 POWER: Cannot set governor of lcore 0 to performance 00:06:28.276 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:28.276 POWER: Cannot set governor of lcore 0 to userspace 00:06:28.276 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:28.276 POWER: Cannot set governor of lcore 0 to userspace 00:06:28.276 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:28.276 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:28.276 POWER: Unable to set Power Management Environment for lcore 0 00:06:28.276 [2024-11-27 11:50:18.319049] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:28.276 [2024-11-27 11:50:18.319076] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:28.276 [2024-11-27 11:50:18.319089] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:28.276 [2024-11-27 11:50:18.319111] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:28.276 [2024-11-27 11:50:18.319121] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:28.276 [2024-11-27 11:50:18.319134] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:28.276 11:50:18 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.276 11:50:18 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:28.276 11:50:18 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.276 11:50:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.846 [2024-11-27 11:50:18.655516] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:28.846 11:50:18 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.846 11:50:18 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:28.846 11:50:18 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.846 11:50:18 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.846 11:50:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.846 ************************************ 00:06:28.846 START TEST scheduler_create_thread 00:06:28.846 ************************************ 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.846 2 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.846 3 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.846 4 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.846 5 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.846 6 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.846 7 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.846 8 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.846 9 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.846 10 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.846 11:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.294 11:50:20 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.294 11:50:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:30.294 11:50:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:30.294 11:50:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.294 11:50:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.232 11:50:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.232 11:50:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:31.232 11:50:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.232 11:50:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.800 11:50:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.800 11:50:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:31.800 11:50:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:31.800 11:50:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.800 11:50:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.739 ************************************ 00:06:32.739 END TEST scheduler_create_thread 00:06:32.739 ************************************ 00:06:32.739 11:50:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.739 00:06:32.739 real 0m3.882s 00:06:32.739 user 0m0.026s 00:06:32.739 sys 0m0.007s 00:06:32.739 11:50:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.739 11:50:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.739 11:50:22 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:32.739 11:50:22 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59080 00:06:32.739 11:50:22 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59080 ']' 00:06:32.739 11:50:22 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59080 00:06:32.739 11:50:22 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:32.739 11:50:22 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.739 11:50:22 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59080 00:06:32.739 killing process with pid 59080 00:06:32.739 11:50:22 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:32.739 11:50:22 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:32.739 11:50:22 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59080' 00:06:32.739 11:50:22 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59080 00:06:32.739 11:50:22 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59080 00:06:32.999 [2024-11-27 11:50:22.933942] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:34.378 00:06:34.378 real 0m6.998s 00:06:34.378 user 0m14.542s 00:06:34.378 sys 0m0.529s 00:06:34.378 11:50:24 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.378 11:50:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.378 ************************************ 00:06:34.378 END TEST event_scheduler 00:06:34.378 ************************************ 00:06:34.378 11:50:24 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:34.378 11:50:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:34.378 11:50:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.378 11:50:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.378 11:50:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.378 ************************************ 00:06:34.378 START TEST app_repeat 00:06:34.378 ************************************ 00:06:34.378 11:50:24 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:34.378 11:50:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.378 11:50:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.378 11:50:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:34.378 11:50:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.378 11:50:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:34.378 11:50:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:34.378 11:50:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:34.378 11:50:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59202 00:06:34.378 11:50:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.378 11:50:24 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:34.378 Process app_repeat pid: 59202 00:06:34.378 11:50:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59202' 00:06:34.378 11:50:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:34.378 11:50:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:34.378 spdk_app_start Round 0 00:06:34.378 11:50:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59202 /var/tmp/spdk-nbd.sock 00:06:34.378 11:50:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59202 ']' 00:06:34.378 11:50:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.378 11:50:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:34.378 11:50:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:34.378 11:50:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.378 11:50:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.378 [2024-11-27 11:50:24.234136] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
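As app_repeat starts round 0 here, every knob it runs with was set in the trace just above; condensed for reference, with all values copied from the trace (the surrounding 'for i in {0..2}' loop at event.sh@23 drives three such rounds):

  # Harness state wired up by event.sh's app_repeat_test, as traced above:
  rpc_server=/var/tmp/spdk-nbd.sock    # -r: RPC socket the nbd helpers will use
  nbd_list=(/dev/nbd0 /dev/nbd1)       # kernel nbd nodes to attach
  bdev_list=(Malloc0 Malloc1)          # created below via bdev_malloc_create 64 4096
  repeat_times=4                       # -t: iterations inside one round
  # 'modprobe -n nbd' at event.sh@51 was a dry run; the real 'modprobe nbd' runs at
  # event.sh@17. The app itself then starts with two reactors (-m 0x3, cores 0-1):
  /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat \
      -r "$rpc_server" -m 0x3 -t "$repeat_times"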
00:06:34.378 [2024-11-27 11:50:24.234257] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59202 ] 00:06:34.378 [2024-11-27 11:50:24.415251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.637 [2024-11-27 11:50:24.537534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.637 [2024-11-27 11:50:24.537566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.205 11:50:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.205 11:50:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:35.205 11:50:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.463 Malloc0 00:06:35.463 11:50:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.722 Malloc1 00:06:35.722 11:50:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.722 11:50:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.722 11:50:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.722 11:50:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:35.722 11:50:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.722 11:50:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:35.722 11:50:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.722 11:50:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.722 11:50:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.722 11:50:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:35.722 11:50:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.722 11:50:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:35.722 11:50:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:35.722 11:50:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:35.722 11:50:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.722 11:50:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:35.981 /dev/nbd0 00:06:35.981 11:50:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:35.981 11:50:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:35.981 11:50:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:35.981 11:50:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:35.981 11:50:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:35.981 11:50:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:35.981 11:50:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:35.981 11:50:25 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:35.981 11:50:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:35.981 11:50:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:35.981 11:50:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.981 1+0 records in 00:06:35.981 1+0 records out 00:06:35.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281049 s, 14.6 MB/s 00:06:35.981 11:50:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.981 11:50:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:35.981 11:50:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.981 11:50:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:35.981 11:50:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:35.981 11:50:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.981 11:50:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.981 11:50:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:36.240 /dev/nbd1 00:06:36.240 11:50:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:36.240 11:50:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:36.240 11:50:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:36.240 11:50:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:36.240 11:50:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:36.240 11:50:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:36.240 11:50:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:36.240 11:50:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:36.240 11:50:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:36.240 11:50:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:36.240 11:50:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.240 1+0 records in 00:06:36.240 1+0 records out 00:06:36.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280398 s, 14.6 MB/s 00:06:36.240 11:50:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.240 11:50:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:36.240 11:50:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.240 11:50:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:36.240 11:50:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:36.240 11:50:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.240 11:50:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.240 11:50:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.240 11:50:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:06:36.240 11:50:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.499 { 00:06:36.499 "nbd_device": "/dev/nbd0", 00:06:36.499 "bdev_name": "Malloc0" 00:06:36.499 }, 00:06:36.499 { 00:06:36.499 "nbd_device": "/dev/nbd1", 00:06:36.499 "bdev_name": "Malloc1" 00:06:36.499 } 00:06:36.499 ]' 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.499 { 00:06:36.499 "nbd_device": "/dev/nbd0", 00:06:36.499 "bdev_name": "Malloc0" 00:06:36.499 }, 00:06:36.499 { 00:06:36.499 "nbd_device": "/dev/nbd1", 00:06:36.499 "bdev_name": "Malloc1" 00:06:36.499 } 00:06:36.499 ]' 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:36.499 /dev/nbd1' 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:36.499 /dev/nbd1' 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:36.499 256+0 records in 00:06:36.499 256+0 records out 00:06:36.499 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119203 s, 88.0 MB/s 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:36.499 256+0 records in 00:06:36.499 256+0 records out 00:06:36.499 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0361147 s, 29.0 MB/s 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:36.499 256+0 records in 00:06:36.499 256+0 records out 00:06:36.499 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0355582 s, 29.5 MB/s 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.499 11:50:26 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:36.499 11:50:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.757 11:50:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:36.757 11:50:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.757 11:50:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.757 11:50:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:36.757 11:50:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:36.757 11:50:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.757 11:50:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.757 11:50:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.757 11:50:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:36.757 11:50:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.757 11:50:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.757 11:50:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.757 11:50:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.757 11:50:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.757 11:50:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.757 11:50:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.757 11:50:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:37.015 11:50:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:37.015 11:50:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:37.015 11:50:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:37.015 11:50:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.015 11:50:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.015 11:50:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:37.015 11:50:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.015 11:50:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.015 11:50:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.015 11:50:27 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.015 11:50:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.273 11:50:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.273 11:50:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:37.273 11:50:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.273 11:50:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.273 11:50:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.273 11:50:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.273 11:50:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:37.273 11:50:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:37.273 11:50:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.273 11:50:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:37.273 11:50:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:37.273 11:50:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:37.273 11:50:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:37.839 11:50:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:39.213 [2024-11-27 11:50:28.861131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.213 [2024-11-27 11:50:28.977350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.213 [2024-11-27 11:50:28.977350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.213 [2024-11-27 11:50:29.171695] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:39.213 [2024-11-27 11:50:29.171782] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:41.115 11:50:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:41.115 spdk_app_start Round 1 00:06:41.115 11:50:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:41.115 11:50:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59202 /var/tmp/spdk-nbd.sock 00:06:41.115 11:50:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59202 ']' 00:06:41.115 11:50:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:41.115 11:50:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:41.115 11:50:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
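Each round runs the same nbd data-integrity cycle seen above. Stripped of the trace plumbing, the write/verify half is approximately as follows (device paths and dd/cmp arguments are copied from the log; the loop form is inferred, and $tmp_file stands for the nbdrandtest path shown in the trace):

    dd if=/dev/urandom of=$tmp_file bs=4096 count=256           # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp_file of=$nbd bs=4096 count=256 oflag=direct  # write it through each NBD export
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $tmp_file $nbd                             # read back and byte-compare
    done
    rm $tmp_file

A mismatch from cmp would fail the round; all rounds above pass, so the data written through /dev/nbd0 and /dev/nbd1 reads back identically.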
00:06:41.115 11:50:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.115 11:50:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.115 11:50:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.115 11:50:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:41.115 11:50:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.115 Malloc0 00:06:41.374 11:50:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.374 Malloc1 00:06:41.634 11:50:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.634 11:50:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.634 11:50:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.634 11:50:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:41.634 11:50:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.634 11:50:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:41.634 11:50:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.634 11:50:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.634 11:50:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.634 11:50:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:41.634 11:50:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.634 11:50:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:41.634 11:50:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:41.634 11:50:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:41.634 11:50:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.634 11:50:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:41.634 /dev/nbd0 00:06:41.634 11:50:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.634 11:50:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.634 11:50:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:41.634 11:50:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:41.634 11:50:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.634 11:50:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.634 11:50:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:41.634 11:50:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:41.634 11:50:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.634 11:50:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.634 11:50:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.634 1+0 records in 00:06:41.634 1+0 records out 
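The waitfornbd helper traced repeatedly in these rounds first polls for the device node and then proves the device answers a real read. A sketch reconstructed from the xtrace (the back-off between retries is an assumption, everything else is visible above; $test_file stands for the nbdtest path in the log):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break    # kernel registered the device
            sleep 0.1                                           # assumed retry back-off
        done
        for ((i = 1; i <= 20; i++)); do
            # one 4 KiB direct read; succeeds once the device is actually usable
            dd if=/dev/$nbd_name of=$test_file bs=4096 count=1 iflag=direct || continue
            size=$(stat -c %s "$test_file")
            rm -f "$test_file"
            [[ $size != 0 ]] && return 0
        done
        return 1
    }

Each probe copies exactly one 4096-byte block, which is why every round logs a matching '1+0 records in / 1+0 records out' pair.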
00:06:41.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235028 s, 17.4 MB/s 00:06:41.634 11:50:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.634 11:50:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:41.634 11:50:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.893 11:50:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.893 11:50:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:41.893 11:50:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.893 11:50:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.893 11:50:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.893 /dev/nbd1 00:06:41.893 11:50:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.893 11:50:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.893 11:50:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:41.893 11:50:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:41.893 11:50:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.893 11:50:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.893 11:50:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:41.893 11:50:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:41.893 11:50:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.893 11:50:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.893 11:50:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.893 1+0 records in 00:06:41.893 1+0 records out 00:06:41.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397664 s, 10.3 MB/s 00:06:41.893 11:50:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.893 11:50:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:41.893 11:50:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.893 11:50:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.893 11:50:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:41.893 11:50:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.893 11:50:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.893 11:50:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.893 11:50:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.893 11:50:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.152 11:50:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:42.152 { 00:06:42.152 "nbd_device": "/dev/nbd0", 00:06:42.152 "bdev_name": "Malloc0" 00:06:42.152 }, 00:06:42.152 { 00:06:42.152 "nbd_device": "/dev/nbd1", 00:06:42.152 "bdev_name": "Malloc1" 00:06:42.152 } 
00:06:42.152 ]' 00:06:42.152 11:50:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:42.152 { 00:06:42.152 "nbd_device": "/dev/nbd0", 00:06:42.152 "bdev_name": "Malloc0" 00:06:42.152 }, 00:06:42.152 { 00:06:42.152 "nbd_device": "/dev/nbd1", 00:06:42.152 "bdev_name": "Malloc1" 00:06:42.152 } 00:06:42.152 ]' 00:06:42.152 11:50:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:42.411 /dev/nbd1' 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:42.411 /dev/nbd1' 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:42.411 256+0 records in 00:06:42.411 256+0 records out 00:06:42.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124047 s, 84.5 MB/s 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:42.411 256+0 records in 00:06:42.411 256+0 records out 00:06:42.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276639 s, 37.9 MB/s 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:42.411 256+0 records in 00:06:42.411 256+0 records out 00:06:42.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0328885 s, 31.9 MB/s 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:42.411 11:50:32 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.411 11:50:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.670 11:50:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.670 11:50:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.670 11:50:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.670 11:50:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.670 11:50:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.670 11:50:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.670 11:50:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.670 11:50:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.670 11:50:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.670 11:50:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.929 11:50:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.929 11:50:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.929 11:50:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.929 11:50:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.930 11:50:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.930 11:50:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.930 11:50:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.930 11:50:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.930 11:50:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.930 11:50:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.930 11:50:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.189 11:50:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:43.189 11:50:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:43.189 11:50:33 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:43.189 11:50:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:43.189 11:50:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:43.189 11:50:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.189 11:50:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:43.189 11:50:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:43.189 11:50:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:43.189 11:50:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:43.189 11:50:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:43.189 11:50:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:43.189 11:50:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:43.448 11:50:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:44.878 [2024-11-27 11:50:34.617998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.878 [2024-11-27 11:50:34.729229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.878 [2024-11-27 11:50:34.729245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.878 [2024-11-27 11:50:34.921155] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:44.878 [2024-11-27 11:50:34.921223] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:46.783 spdk_app_start Round 2 00:06:46.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:46.783 11:50:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:46.783 11:50:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:46.783 11:50:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59202 /var/tmp/spdk-nbd.sock 00:06:46.783 11:50:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59202 ']' 00:06:46.783 11:50:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.783 11:50:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.783 11:50:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
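The nbd_get_count check that brackets each round is a short RPC-plus-jq pipeline; in outline (commands copied from the trace, the assignment syntax inferred):

    nbd_disks_json=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)  # grep exits 1 on zero matches
    # expected: 2 while Malloc0/Malloc1 are exported, 0 after nbd_stop_disk

The '|| true' corresponds to the bare 'true' traced at nbd_common.sh@65 once both disks are stopped and grep finds nothing.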
00:06:46.783 11:50:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.783 11:50:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.783 11:50:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.783 11:50:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:46.783 11:50:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:47.042 Malloc0 00:06:47.042 11:50:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:47.302 Malloc1 00:06:47.302 11:50:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:47.302 11:50:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.302 11:50:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:47.302 11:50:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:47.302 11:50:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.302 11:50:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:47.302 11:50:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:47.302 11:50:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.302 11:50:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:47.302 11:50:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:47.302 11:50:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.302 11:50:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:47.302 11:50:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:47.302 11:50:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:47.302 11:50:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.302 11:50:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:47.561 /dev/nbd0 00:06:47.561 11:50:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:47.561 11:50:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:47.561 11:50:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:47.561 11:50:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:47.561 11:50:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.561 11:50:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.561 11:50:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:47.561 11:50:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:47.561 11:50:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.561 11:50:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.561 11:50:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:47.562 1+0 records in 00:06:47.562 1+0 records out 
00:06:47.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206156 s, 19.9 MB/s 00:06:47.562 11:50:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:47.562 11:50:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:47.562 11:50:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:47.562 11:50:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.562 11:50:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:47.562 11:50:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.562 11:50:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.562 11:50:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:47.821 /dev/nbd1 00:06:47.821 11:50:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:47.821 11:50:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:47.821 11:50:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:47.821 11:50:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:47.821 11:50:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.821 11:50:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.821 11:50:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:47.821 11:50:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:47.821 11:50:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.821 11:50:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.821 11:50:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:47.821 1+0 records in 00:06:47.821 1+0 records out 00:06:47.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313238 s, 13.1 MB/s 00:06:47.821 11:50:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:47.821 11:50:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:47.821 11:50:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:47.821 11:50:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.821 11:50:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:47.821 11:50:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.821 11:50:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.821 11:50:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.821 11:50:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.821 11:50:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:48.081 11:50:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:48.081 { 00:06:48.081 "nbd_device": "/dev/nbd0", 00:06:48.081 "bdev_name": "Malloc0" 00:06:48.081 }, 00:06:48.081 { 00:06:48.081 "nbd_device": "/dev/nbd1", 00:06:48.081 "bdev_name": "Malloc1" 00:06:48.081 } 
00:06:48.081 ]' 00:06:48.081 11:50:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:48.081 { 00:06:48.081 "nbd_device": "/dev/nbd0", 00:06:48.081 "bdev_name": "Malloc0" 00:06:48.081 }, 00:06:48.081 { 00:06:48.081 "nbd_device": "/dev/nbd1", 00:06:48.081 "bdev_name": "Malloc1" 00:06:48.081 } 00:06:48.081 ]' 00:06:48.081 11:50:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:48.081 11:50:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:48.081 /dev/nbd1' 00:06:48.081 11:50:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:48.081 /dev/nbd1' 00:06:48.081 11:50:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.081 11:50:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:48.081 11:50:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:48.081 11:50:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:48.081 11:50:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:48.081 11:50:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:48.081 11:50:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.081 11:50:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:48.081 11:50:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:48.081 11:50:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:48.081 11:50:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:48.081 11:50:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:48.081 256+0 records in 00:06:48.081 256+0 records out 00:06:48.081 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128588 s, 81.5 MB/s 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:48.081 256+0 records in 00:06:48.081 256+0 records out 00:06:48.081 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269807 s, 38.9 MB/s 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:48.081 256+0 records in 00:06:48.081 256+0 records out 00:06:48.081 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280113 s, 37.4 MB/s 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:48.081 11:50:38 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.081 11:50:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:48.340 11:50:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:48.341 11:50:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:48.341 11:50:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:48.341 11:50:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.341 11:50:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.341 11:50:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:48.341 11:50:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:48.341 11:50:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.341 11:50:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.341 11:50:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:48.600 11:50:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:48.600 11:50:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:48.600 11:50:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:48.600 11:50:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.600 11:50:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.600 11:50:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:48.600 11:50:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:48.600 11:50:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.600 11:50:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:48.600 11:50:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.600 11:50:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:48.860 11:50:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:48.860 11:50:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:48.860 11:50:38 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:48.860 11:50:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:48.860 11:50:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.860 11:50:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:48.860 11:50:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:48.860 11:50:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:48.860 11:50:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:48.860 11:50:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:48.860 11:50:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:48.860 11:50:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:48.860 11:50:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:49.428 11:50:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:50.365 [2024-11-27 11:50:40.335566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.624 [2024-11-27 11:50:40.438278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.624 [2024-11-27 11:50:40.438278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.624 [2024-11-27 11:50:40.629829] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:50.624 [2024-11-27 11:50:40.629910] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:52.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:52.532 11:50:42 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59202 /var/tmp/spdk-nbd.sock 00:06:52.532 11:50:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59202 ']' 00:06:52.532 11:50:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:52.532 11:50:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.532 11:50:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:52.532 11:50:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.532 11:50:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:52.532 11:50:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.532 11:50:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:52.532 11:50:42 event.app_repeat -- event/event.sh@39 -- # killprocess 59202 00:06:52.532 11:50:42 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59202 ']' 00:06:52.532 11:50:42 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59202 00:06:52.532 11:50:42 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:52.532 11:50:42 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.532 11:50:42 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59202 00:06:52.532 killing process with pid 59202 00:06:52.532 11:50:42 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.533 11:50:42 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.533 11:50:42 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59202' 00:06:52.533 11:50:42 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59202 00:06:52.533 11:50:42 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59202 00:06:53.470 spdk_app_start is called in Round 0. 00:06:53.470 Shutdown signal received, stop current app iteration 00:06:53.470 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 reinitialization... 00:06:53.470 spdk_app_start is called in Round 1. 00:06:53.470 Shutdown signal received, stop current app iteration 00:06:53.470 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 reinitialization... 00:06:53.470 spdk_app_start is called in Round 2. 00:06:53.470 Shutdown signal received, stop current app iteration 00:06:53.470 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 reinitialization... 00:06:53.470 spdk_app_start is called in Round 3. 00:06:53.470 Shutdown signal received, stop current app iteration 00:06:53.470 11:50:43 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:53.470 11:50:43 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:53.470 00:06:53.470 real 0m19.331s 00:06:53.470 user 0m41.140s 00:06:53.470 sys 0m3.083s 00:06:53.470 11:50:43 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.470 11:50:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:53.470 ************************************ 00:06:53.470 END TEST app_repeat 00:06:53.470 ************************************ 00:06:53.729 11:50:43 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:53.729 11:50:43 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:53.729 11:50:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.729 11:50:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.729 11:50:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.729 ************************************ 00:06:53.729 START TEST cpu_locks 00:06:53.729 ************************************ 00:06:53.729 11:50:43 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:53.729 * Looking for test storage... 
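The killprocess helper used at teardown reads roughly as follows, reconstructed from the xtrace just above (the sudo branch is only hinted at by the '[ reactor_0 = sudo ]' test, so its body here is an assumption):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 0                      # nothing to do if already gone
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # when launched via sudo, the real workload is a child process
            [[ $process_name == sudo ]] && pid=$(pgrep -P "$pid")   # assumed handling
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                     # reap it so the caller can assert exit
    }

Here the process name resolves to reactor_0, so the sudo branch is skipped and pid 59202 is signalled and reaped directly.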
00:06:53.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:53.729 11:50:43 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:53.729 11:50:43 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:53.729 11:50:43 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:53.729 11:50:43 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.729 11:50:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:53.989 11:50:43 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.989 11:50:43 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.989 11:50:43 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.989 11:50:43 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:53.989 11:50:43 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.989 11:50:43 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:53.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.989 --rc genhtml_branch_coverage=1 00:06:53.989 --rc genhtml_function_coverage=1 00:06:53.989 --rc genhtml_legend=1 00:06:53.989 --rc geninfo_all_blocks=1 00:06:53.989 --rc geninfo_unexecuted_blocks=1 00:06:53.989 00:06:53.989 ' 00:06:53.989 11:50:43 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:53.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.989 --rc genhtml_branch_coverage=1 00:06:53.989 --rc genhtml_function_coverage=1 
00:06:53.989 --rc genhtml_legend=1 00:06:53.989 --rc geninfo_all_blocks=1 00:06:53.989 --rc geninfo_unexecuted_blocks=1 00:06:53.989 00:06:53.989 ' 00:06:53.989 11:50:43 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:53.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.989 --rc genhtml_branch_coverage=1 00:06:53.989 --rc genhtml_function_coverage=1 00:06:53.989 --rc genhtml_legend=1 00:06:53.989 --rc geninfo_all_blocks=1 00:06:53.989 --rc geninfo_unexecuted_blocks=1 00:06:53.989 00:06:53.989 ' 00:06:53.989 11:50:43 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:53.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.989 --rc genhtml_branch_coverage=1 00:06:53.989 --rc genhtml_function_coverage=1 00:06:53.989 --rc genhtml_legend=1 00:06:53.989 --rc geninfo_all_blocks=1 00:06:53.989 --rc geninfo_unexecuted_blocks=1 00:06:53.989 00:06:53.989 ' 00:06:53.989 11:50:43 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:53.989 11:50:43 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:53.989 11:50:43 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:53.989 11:50:43 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:53.989 11:50:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.989 11:50:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.989 11:50:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.989 ************************************ 00:06:53.989 START TEST default_locks 00:06:53.989 ************************************ 00:06:53.989 11:50:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:53.989 11:50:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59645 00:06:53.989 11:50:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59645 00:06:53.989 11:50:43 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59645 ']' 00:06:53.989 11:50:43 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.989 11:50:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.989 11:50:43 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.989 11:50:43 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.989 11:50:43 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.989 11:50:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.989 [2024-11-27 11:50:43.901903] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
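The xtrace above is the suite probing its lcov toolchain: lcov --version is piped through awk '{print $NF}', and the result (1.15 in this run) is compared against 2 with a component-wise dotted-version test (cmp_versions in scripts/common.sh); because 1.15 sorts below 2, the run opts into the old-style --rc lcov_branch_coverage / --rc lcov_function_coverage options seen in LCOV_OPTS. A minimal bash re-creation of that comparison, with a hypothetical ver_lt standing in for the suite's lt helper:

  # sketch: "is version A less than version B?", split on the same IFS=.-:
  ver_lt() {
    local -a ver1 ver2
    local v len
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
      # a missing field counts as 0, so "2" compares like "2.0" against "1.15"
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
  }
  ver_lt 1.15 2 && echo "old lcov: add the --rc branch/function coverage flags"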
00:06:53.989 [2024-11-27 11:50:43.902025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59645 ] 00:06:54.250 [2024-11-27 11:50:44.084278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.250 [2024-11-27 11:50:44.200815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.188 11:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.188 11:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:55.188 11:50:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59645 00:06:55.188 11:50:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59645 00:06:55.188 11:50:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:55.448 11:50:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59645 00:06:55.448 11:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59645 ']' 00:06:55.448 11:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59645 00:06:55.448 11:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:55.448 11:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.448 11:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59645 00:06:55.707 11:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.707 11:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.707 killing process with pid 59645 00:06:55.707 11:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59645' 00:06:55.707 11:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59645 00:06:55.707 11:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59645 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59645 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59645 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59645 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59645 ']' 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.243 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.243 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59645) - No such process 00:06:58.243 ERROR: process (pid: 59645) is no longer running 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:58.243 00:06:58.243 real 0m4.071s 00:06:58.243 user 0m4.016s 00:06:58.243 sys 0m0.668s 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.243 ************************************ 00:06:58.243 11:50:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.243 END TEST default_locks 00:06:58.243 ************************************ 00:06:58.243 11:50:47 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:58.243 11:50:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.243 11:50:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.243 11:50:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.243 ************************************ 00:06:58.243 START TEST default_locks_via_rpc 00:06:58.243 ************************************ 00:06:58.243 11:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:58.243 11:50:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59720 00:06:58.243 11:50:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.243 11:50:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59720 00:06:58.243 11:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59720 ']' 00:06:58.243 11:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.243 11:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
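Decoded, the default_locks run that ends above makes three assertions: a target started on mask 0x1 holds an advisory lock whose name contains spdk_cpu_lock (checked via lslocks -p <pid>), the lock dies with the process, and waitforlisten against the dead pid must fail, which the NOT wrapper converts into a pass. A rough bash rendition, simplified from the locks_exist, killprocess and NOT helpers traced here and reusing this run's pid:

  # the lock probe exactly as traced (lslocks comes from util-linux)
  locks_exist() { lslocks -p "$1" | grep -q spdk_cpu_lock; }

  # simplified NOT: succeed only when the wrapped command fails
  NOT() { if "$@"; then return 1; else return 0; fi; }

  locks_exist 59645          # passes while spdk_tgt (this run's pid) is alive
  kill 59645 && wait 59645   # killprocess: kill, then wait in the launching shell
  NOT waitforlisten 59645    # passes: the process is gone, so waitforlisten fails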
00:06:58.243 11:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.244 11:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.244 11:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.244 [2024-11-27 11:50:48.046380] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:06:58.244 [2024-11-27 11:50:48.046500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59720 ] 00:06:58.244 [2024-11-27 11:50:48.227032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.503 [2024-11-27 11:50:48.338261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.441 11:50:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.441 11:50:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:59.441 11:50:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:59.441 11:50:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.441 11:50:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.441 11:50:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.441 11:50:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:59.441 11:50:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:59.441 11:50:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:59.441 11:50:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:59.441 11:50:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:59.441 11:50:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.441 11:50:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.441 11:50:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.441 11:50:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59720 00:06:59.441 11:50:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59720 00:06:59.441 11:50:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:59.699 11:50:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59720 00:06:59.699 11:50:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59720 ']' 00:06:59.699 11:50:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59720 00:06:59.699 11:50:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:59.699 11:50:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.699 11:50:49 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59720 00:06:59.699 11:50:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.699 11:50:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.699 killing process with pid 59720 00:06:59.699 11:50:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59720' 00:06:59.699 11:50:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59720 00:06:59.699 11:50:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59720 00:07:02.297 00:07:02.297 real 0m4.083s 00:07:02.297 user 0m4.005s 00:07:02.297 sys 0m0.698s 00:07:02.297 ************************************ 00:07:02.297 END TEST default_locks_via_rpc 00:07:02.297 ************************************ 00:07:02.297 11:50:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.297 11:50:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.297 11:50:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:02.297 11:50:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.297 11:50:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.297 11:50:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.297 ************************************ 00:07:02.297 START TEST non_locking_app_on_locked_coremask 00:07:02.297 ************************************ 00:07:02.297 11:50:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:02.297 11:50:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59796 00:07:02.297 11:50:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59796 /var/tmp/spdk.sock 00:07:02.297 11:50:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59796 ']' 00:07:02.297 11:50:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.297 11:50:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.297 11:50:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.297 11:50:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.297 11:50:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.297 11:50:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:02.297 [2024-11-27 11:50:52.203418] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
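default_locks_via_rpc, finished above, repeats the lock check over JSON-RPC instead of process lifetime: framework_disable_cpumask_locks releases the per-core lock files on the live target (the subsequent no_locks sees an empty /var/tmp/spdk_cpu_lock_* glob), and framework_enable_cpumask_locks claims them back before the target is killed. Roughly, with SPDK's scripts/rpc.py standing in for the suite's rpc_cmd wrapper (socket path and pid from this log):

  # drop the core locks on a running target, verify, then re-claim them
  scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null    # expect no matches while disabled
  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  lslocks -p 59720 | grep spdk_cpu_lock      # the core 0 lock is held again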
00:07:02.297 [2024-11-27 11:50:52.203533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59796 ] 00:07:02.557 [2024-11-27 11:50:52.382374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.557 [2024-11-27 11:50:52.495352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.498 11:50:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.498 11:50:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:03.498 11:50:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:03.498 11:50:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59818 00:07:03.498 11:50:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59818 /var/tmp/spdk2.sock 00:07:03.498 11:50:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59818 ']' 00:07:03.498 11:50:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.499 11:50:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.499 11:50:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.499 11:50:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.499 11:50:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.499 [2024-11-27 11:50:53.442994] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:03.499 [2024-11-27 11:50:53.443136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59818 ] 00:07:03.758 [2024-11-27 11:50:53.626645] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:03.758 [2024-11-27 11:50:53.626694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.018 [2024-11-27 11:50:53.854496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.558 11:50:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.558 11:50:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:06.558 11:50:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59796 00:07:06.558 11:50:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59796 00:07:06.558 11:50:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.818 11:50:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59796 00:07:06.818 11:50:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59796 ']' 00:07:06.818 11:50:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59796 00:07:06.818 11:50:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:06.818 11:50:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.818 11:50:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59796 00:07:06.818 11:50:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.818 11:50:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.818 killing process with pid 59796 00:07:06.818 11:50:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59796' 00:07:06.818 11:50:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59796 00:07:06.818 11:50:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59796 00:07:12.095 11:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59818 00:07:12.095 11:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59818 ']' 00:07:12.095 11:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59818 00:07:12.095 11:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:12.095 11:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.095 11:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59818 00:07:12.095 11:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.095 killing process with pid 59818 00:07:12.095 11:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.095 11:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59818' 00:07:12.095 11:51:01 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59818 00:07:12.095 11:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59818 00:07:14.003 00:07:14.003 real 0m11.762s 00:07:14.003 user 0m11.974s 00:07:14.003 sys 0m1.428s 00:07:14.003 11:51:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.003 11:51:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.003 ************************************ 00:07:14.003 END TEST non_locking_app_on_locked_coremask 00:07:14.003 ************************************ 00:07:14.003 11:51:03 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:14.003 11:51:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.003 11:51:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.003 11:51:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.003 ************************************ 00:07:14.003 START TEST locking_app_on_unlocked_coremask 00:07:14.003 ************************************ 00:07:14.003 11:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:14.003 11:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59967 00:07:14.003 11:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59967 /var/tmp/spdk.sock 00:07:14.003 11:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:14.003 11:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59967 ']' 00:07:14.003 11:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.003 11:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.003 11:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.003 11:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.003 11:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.003 [2024-11-27 11:51:04.040024] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:14.003 [2024-11-27 11:51:04.040151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59967 ] 00:07:14.263 [2024-11-27 11:51:04.221362] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
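The non_locking_app_on_locked_coremask phase that ends above demonstrates the opt-out: a second target may share core 0 with a locked one if it passes --disable-cpumask-locks (its startup prints the "CPU core locks deactivated." notice), and because both instances expose JSON-RPC, the second needs its own socket via -r. Condensed to the two launches the trace performs, with backgrounding added for the sketch:

  bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$bin" -m 0x1 & pid1=$!        # claims core 0 and /var/tmp/spdk_cpu_lock_000
  "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!
  # only the locking instance shows up holding the lock:
  lslocks -p "$pid1" | grep spdk_cpu_lock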
00:07:14.263 [2024-11-27 11:51:04.221414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.522 [2024-11-27 11:51:04.334217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.462 11:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.463 11:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:15.463 11:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59985 00:07:15.463 11:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59985 /var/tmp/spdk2.sock 00:07:15.463 11:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:15.463 11:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59985 ']' 00:07:15.463 11:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.463 11:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.463 11:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.463 11:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.463 11:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.463 [2024-11-27 11:51:05.301711] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:15.463 [2024-11-27 11:51:05.301833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59985 ] 00:07:15.463 [2024-11-27 11:51:05.485526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.723 [2024-11-27 11:51:05.716521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.279 11:51:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.279 11:51:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:18.279 11:51:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59985 00:07:18.279 11:51:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59985 00:07:18.279 11:51:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.847 11:51:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59967 00:07:18.847 11:51:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59967 ']' 00:07:18.847 11:51:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59967 00:07:18.847 11:51:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:18.847 11:51:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.847 11:51:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59967 00:07:18.847 11:51:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.847 11:51:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.847 killing process with pid 59967 00:07:18.847 11:51:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59967' 00:07:18.847 11:51:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59967 00:07:18.847 11:51:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59967 00:07:24.118 11:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59985 00:07:24.118 11:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59985 ']' 00:07:24.118 11:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59985 00:07:24.118 11:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:24.118 11:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.119 11:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59985 00:07:24.119 11:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.119 11:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.119 killing process with pid 59985 00:07:24.119 11:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59985' 00:07:24.119 11:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59985 00:07:24.119 11:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59985 00:07:26.047 ************************************ 00:07:26.047 END TEST locking_app_on_unlocked_coremask 00:07:26.047 ************************************ 00:07:26.047 00:07:26.047 real 0m12.028s 00:07:26.047 user 0m12.332s 00:07:26.047 sys 0m1.427s 00:07:26.047 11:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.047 11:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.047 11:51:16 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:26.047 11:51:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.047 11:51:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.047 11:51:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.047 ************************************ 00:07:26.047 START TEST locking_app_on_locked_coremask 00:07:26.047 ************************************ 00:07:26.047 11:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:26.047 11:51:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60144 00:07:26.047 11:51:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:26.047 11:51:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60144 /var/tmp/spdk.sock 00:07:26.047 11:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60144 ']' 00:07:26.047 11:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.047 11:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.047 11:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.047 11:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.047 11:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.305 [2024-11-27 11:51:16.135935] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
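locking_app_on_unlocked_coremask, which wraps up above, flips the roles: the first target starts lock-free and the second runs with locking enabled, so the suite checks the lock against the second pid (59985 in this run). The point is that ownership follows whoever asks for the lock, not launch order; a sketch under the same flags:

  bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$bin" -m 0x1 --disable-cpumask-locks & unlocked=$!   # takes no lock
  "$bin" -m 0x1 -r /var/tmp/spdk2.sock & locked=$!      # claims core 0
  lslocks -p "$locked"   | grep -q spdk_cpu_lock && echo "lock held by $locked"
  lslocks -p "$unlocked" | grep -q spdk_cpu_lock || echo "$unlocked holds none"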
00:07:26.305 [2024-11-27 11:51:16.136071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60144 ] 00:07:26.305 [2024-11-27 11:51:16.316718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.563 [2024-11-27 11:51:16.425801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.498 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.498 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:27.498 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:27.498 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60160 00:07:27.498 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60160 /var/tmp/spdk2.sock 00:07:27.498 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:27.498 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60160 /var/tmp/spdk2.sock 00:07:27.498 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:27.498 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.498 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:27.498 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.498 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60160 /var/tmp/spdk2.sock 00:07:27.498 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60160 ']' 00:07:27.498 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:27.498 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.498 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:27.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:27.498 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.498 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.498 [2024-11-27 11:51:17.393710] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:27.498 [2024-11-27 11:51:17.393888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60160 ] 00:07:27.756 [2024-11-27 11:51:17.585008] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60144 has claimed it. 00:07:27.756 [2024-11-27 11:51:17.585076] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:28.015 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60160) - No such process 00:07:28.015 ERROR: process (pid: 60160) is no longer running 00:07:28.015 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.015 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:28.015 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:28.015 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.015 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:28.015 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.015 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60144 00:07:28.015 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60144 00:07:28.015 11:51:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:28.583 11:51:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60144 00:07:28.583 11:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60144 ']' 00:07:28.583 11:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60144 00:07:28.583 11:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:28.583 11:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.583 11:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60144 00:07:28.583 11:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.583 11:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.583 killing process with pid 60144 00:07:28.583 11:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60144' 00:07:28.583 11:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60144 00:07:28.583 11:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60144 00:07:31.119 00:07:31.119 real 0m4.832s 00:07:31.119 user 0m5.047s 00:07:31.119 sys 0m0.855s 00:07:31.119 11:51:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.119 11:51:20 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:07:31.119 ************************************ 00:07:31.119 END TEST locking_app_on_locked_coremask 00:07:31.119 ************************************ 00:07:31.119 11:51:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:31.119 11:51:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.119 11:51:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.119 11:51:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:31.119 ************************************ 00:07:31.119 START TEST locking_overlapped_coremask 00:07:31.120 ************************************ 00:07:31.120 11:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:31.120 11:51:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60224 00:07:31.120 11:51:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60224 /var/tmp/spdk.sock 00:07:31.120 11:51:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:31.120 11:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60224 ']' 00:07:31.120 11:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.120 11:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.120 11:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.120 11:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.120 11:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.120 [2024-11-27 11:51:21.042415] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
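locking_app_on_locked_coremask, traced just before this, is the enforcement case: a second default-configured target on the same mask aborts inside claim_cpu_cores with "Cannot create lock on core 0, probably process 60144 has claimed it", and NOT waitforlisten confirms it never came up. The per-core files behave like exclusive advisory locks; the flock sketch below is an illustrative analog of that behavior only, not SPDK's actual claim code in app.c:

  # illustrative only: one exclusive advisory lock per claimed core,
  # mirroring the /var/tmp/spdk_cpu_lock_000 file named in this log
  exec 9> /var/tmp/spdk_cpu_lock_000
  if ! flock -xn 9; then
    echo "Cannot lock core 0: another process has claimed it" >&2
    exit 1
  fi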
00:07:31.120 [2024-11-27 11:51:21.042526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60224 ] 00:07:31.378 [2024-11-27 11:51:21.226286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:31.378 [2024-11-27 11:51:21.353893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.378 [2024-11-27 11:51:21.353986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.378 [2024-11-27 11:51:21.353947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.312 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.312 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:32.312 11:51:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60248 00:07:32.312 11:51:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60248 /var/tmp/spdk2.sock 00:07:32.312 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:32.312 11:51:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:32.312 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60248 /var/tmp/spdk2.sock 00:07:32.312 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:32.312 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.312 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:32.312 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.312 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60248 /var/tmp/spdk2.sock 00:07:32.312 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60248 ']' 00:07:32.312 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:32.312 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:32.312 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:32.312 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.312 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.312 [2024-11-27 11:51:22.360897] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:32.312 [2024-11-27 11:51:22.361021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60248 ] 00:07:32.571 [2024-11-27 11:51:22.546815] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60224 has claimed it. 00:07:32.571 [2024-11-27 11:51:22.546889] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:33.225 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60248) - No such process 00:07:33.225 ERROR: process (pid: 60248) is no longer running 00:07:33.225 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.225 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:33.225 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:33.225 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:33.225 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:33.225 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:33.225 11:51:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:33.225 11:51:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:33.225 11:51:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:33.225 11:51:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:33.225 11:51:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60224 00:07:33.225 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60224 ']' 00:07:33.225 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60224 00:07:33.225 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:33.225 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.225 11:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60224 00:07:33.225 11:51:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.225 11:51:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.225 killing process with pid 60224 00:07:33.225 11:51:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60224' 00:07:33.225 11:51:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60224 00:07:33.225 11:51:23 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60224 00:07:35.766 00:07:35.766 real 0m4.512s 00:07:35.766 user 0m12.232s 00:07:35.766 sys 0m0.641s 00:07:35.766 11:51:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.766 11:51:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.766 ************************************ 00:07:35.766 END TEST locking_overlapped_coremask 00:07:35.766 ************************************ 00:07:35.766 11:51:25 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:35.766 11:51:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.766 11:51:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.766 11:51:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:35.766 ************************************ 00:07:35.766 START TEST locking_overlapped_coremask_via_rpc 00:07:35.766 ************************************ 00:07:35.766 11:51:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:35.766 11:51:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60316 00:07:35.766 11:51:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:35.766 11:51:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60316 /var/tmp/spdk.sock 00:07:35.766 11:51:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60316 ']' 00:07:35.766 11:51:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.766 11:51:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.766 11:51:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.766 11:51:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.766 11:51:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.766 [2024-11-27 11:51:25.629812] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:35.766 [2024-11-27 11:51:25.629938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60316 ] 00:07:35.766 [2024-11-27 11:51:25.811598] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
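In the locking_overlapped_coremask run that ends above, masks 0x7 (cores 0-2) and 0x1c (cores 2-4) intersect exactly on core 2, the core named in the claim error, and check_remaining_locks then asserts that only the 0x7 target's three lock files survive. Both facts reduce to one-liners:

  # cores 0,1,2 vs cores 2,3,4: the contested core is bit 2
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))          # prints 0x4, i.e. core 2

  # the glob-vs-brace comparison the trace performs after the failed start
  locks=(/var/tmp/spdk_cpu_lock_*)                         # what actually exists
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})       # what 0x7 should hold
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "only the 0x7 locks remain"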
00:07:35.766 [2024-11-27 11:51:25.811649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.025 [2024-11-27 11:51:25.934211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.025 [2024-11-27 11:51:25.934354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.025 [2024-11-27 11:51:25.934414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.963 11:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.963 11:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:36.963 11:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60335 00:07:36.963 11:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60335 /var/tmp/spdk2.sock 00:07:36.963 11:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:36.963 11:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60335 ']' 00:07:36.963 11:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:36.963 11:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:36.963 11:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:36.963 11:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.963 11:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.963 [2024-11-27 11:51:26.895941] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:36.963 [2024-11-27 11:51:26.896059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60335 ] 00:07:37.223 [2024-11-27 11:51:27.079216] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:37.223 [2024-11-27 11:51:27.079264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:37.482 [2024-11-27 11:51:27.328791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.482 [2024-11-27 11:51:27.332541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.482 [2024-11-27 11:51:27.332574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:39.388 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.388 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:39.389 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:39.389 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.648 [2024-11-27 11:51:29.461704] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60316 has claimed it. 
00:07:39.648 request: 00:07:39.648 { 00:07:39.648 "method": "framework_enable_cpumask_locks", 00:07:39.648 "req_id": 1 00:07:39.648 } 00:07:39.648 Got JSON-RPC error response 00:07:39.648 response: 00:07:39.648 { 00:07:39.648 "code": -32603, 00:07:39.648 "message": "Failed to claim CPU core: 2" 00:07:39.648 } 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60316 /var/tmp/spdk.sock 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60316 ']' 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60335 /var/tmp/spdk2.sock 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60335 ']' 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
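With core 2 still held by pid 60316, check_remaining_locks (traced below) asserts that only the first target's lock files exist. As it appears in cpu_locks.sh, the check reduces to comparing a glob against a brace expansion:

    # Exactly /var/tmp/spdk_cpu_lock_000..002 should exist; the file for
    # core 2 belongs to pid 60316, which is why pid 60335 could not claim it.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]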
00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.648 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.907 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.907 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:39.907 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:39.907 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:39.907 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:39.907 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:39.907 00:07:39.907 real 0m4.399s 00:07:39.907 user 0m1.248s 00:07:39.908 sys 0m0.239s 00:07:39.908 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.908 ************************************ 00:07:39.908 END TEST locking_overlapped_coremask_via_rpc 00:07:39.908 11:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.908 ************************************ 00:07:40.166 11:51:29 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:40.166 11:51:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60316 ]] 00:07:40.166 11:51:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60316 00:07:40.166 11:51:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60316 ']' 00:07:40.166 11:51:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60316 00:07:40.166 11:51:29 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:40.166 11:51:29 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.166 11:51:29 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60316 00:07:40.166 11:51:30 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.166 killing process with pid 60316 00:07:40.166 11:51:30 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.166 11:51:30 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60316' 00:07:40.166 11:51:30 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60316 00:07:40.166 11:51:30 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60316 00:07:42.703 11:51:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60335 ]] 00:07:42.703 11:51:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60335 00:07:42.703 11:51:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60335 ']' 00:07:42.703 11:51:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60335 00:07:42.703 11:51:32 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:42.703 11:51:32 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.703 
11:51:32 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60335 00:07:42.703 11:51:32 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:42.703 killing process with pid 60335 00:07:42.703 11:51:32 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:42.703 11:51:32 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60335' 00:07:42.703 11:51:32 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60335 00:07:42.703 11:51:32 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60335 00:07:45.239 11:51:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:45.239 11:51:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:45.239 11:51:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60316 ]] 00:07:45.239 11:51:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60316 00:07:45.239 11:51:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60316 ']' 00:07:45.239 Process with pid 60316 is not found 00:07:45.239 11:51:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60316 00:07:45.239 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60316) - No such process 00:07:45.239 11:51:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60316 is not found' 00:07:45.239 11:51:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60335 ]] 00:07:45.239 11:51:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60335 00:07:45.239 11:51:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60335 ']' 00:07:45.239 11:51:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60335 00:07:45.239 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60335) - No such process 00:07:45.239 Process with pid 60335 is not found 00:07:45.239 11:51:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60335 is not found' 00:07:45.239 11:51:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:45.239 00:07:45.239 real 0m51.631s 00:07:45.239 user 1m27.702s 00:07:45.239 sys 0m7.327s 00:07:45.239 11:51:35 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.239 11:51:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:45.239 ************************************ 00:07:45.239 END TEST cpu_locks 00:07:45.239 ************************************ 00:07:45.239 00:07:45.239 real 1m23.349s 00:07:45.239 user 2m30.700s 00:07:45.239 sys 0m11.670s 00:07:45.239 11:51:35 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.239 ************************************ 00:07:45.239 11:51:35 event -- common/autotest_common.sh@10 -- # set +x 00:07:45.239 END TEST event 00:07:45.239 ************************************ 00:07:45.499 11:51:35 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:45.499 11:51:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.499 11:51:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.499 11:51:35 -- common/autotest_common.sh@10 -- # set +x 00:07:45.499 ************************************ 00:07:45.499 START TEST thread 00:07:45.499 ************************************ 00:07:45.499 11:51:35 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:45.499 * Looking for test storage... 
00:07:45.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:45.499 11:51:35 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:45.499 11:51:35 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:45.499 11:51:35 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:45.499 11:51:35 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:45.499 11:51:35 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.499 11:51:35 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.499 11:51:35 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.499 11:51:35 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.499 11:51:35 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.499 11:51:35 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.499 11:51:35 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.499 11:51:35 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.499 11:51:35 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.499 11:51:35 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.499 11:51:35 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.499 11:51:35 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:45.499 11:51:35 thread -- scripts/common.sh@345 -- # : 1 00:07:45.499 11:51:35 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.499 11:51:35 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:45.758 11:51:35 thread -- scripts/common.sh@365 -- # decimal 1 00:07:45.758 11:51:35 thread -- scripts/common.sh@353 -- # local d=1 00:07:45.758 11:51:35 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.758 11:51:35 thread -- scripts/common.sh@355 -- # echo 1 00:07:45.758 11:51:35 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.758 11:51:35 thread -- scripts/common.sh@366 -- # decimal 2 00:07:45.758 11:51:35 thread -- scripts/common.sh@353 -- # local d=2 00:07:45.758 11:51:35 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.758 11:51:35 thread -- scripts/common.sh@355 -- # echo 2 00:07:45.758 11:51:35 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.758 11:51:35 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.758 11:51:35 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.758 11:51:35 thread -- scripts/common.sh@368 -- # return 0 00:07:45.758 11:51:35 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.758 11:51:35 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:45.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.758 --rc genhtml_branch_coverage=1 00:07:45.758 --rc genhtml_function_coverage=1 00:07:45.758 --rc genhtml_legend=1 00:07:45.758 --rc geninfo_all_blocks=1 00:07:45.758 --rc geninfo_unexecuted_blocks=1 00:07:45.758 00:07:45.758 ' 00:07:45.758 11:51:35 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:45.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.758 --rc genhtml_branch_coverage=1 00:07:45.758 --rc genhtml_function_coverage=1 00:07:45.758 --rc genhtml_legend=1 00:07:45.758 --rc geninfo_all_blocks=1 00:07:45.758 --rc geninfo_unexecuted_blocks=1 00:07:45.758 00:07:45.758 ' 00:07:45.758 11:51:35 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:45.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:45.758 --rc genhtml_branch_coverage=1 00:07:45.758 --rc genhtml_function_coverage=1 00:07:45.758 --rc genhtml_legend=1 00:07:45.758 --rc geninfo_all_blocks=1 00:07:45.758 --rc geninfo_unexecuted_blocks=1 00:07:45.758 00:07:45.758 ' 00:07:45.758 11:51:35 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:45.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.758 --rc genhtml_branch_coverage=1 00:07:45.758 --rc genhtml_function_coverage=1 00:07:45.758 --rc genhtml_legend=1 00:07:45.758 --rc geninfo_all_blocks=1 00:07:45.758 --rc geninfo_unexecuted_blocks=1 00:07:45.758 00:07:45.758 ' 00:07:45.758 11:51:35 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:45.758 11:51:35 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:45.758 11:51:35 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.758 11:51:35 thread -- common/autotest_common.sh@10 -- # set +x 00:07:45.758 ************************************ 00:07:45.758 START TEST thread_poller_perf 00:07:45.758 ************************************ 00:07:45.758 11:51:35 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:45.758 [2024-11-27 11:51:35.631742] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:45.758 [2024-11-27 11:51:35.632021] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60536 ] 00:07:46.016 [2024-11-27 11:51:35.813979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.016 [2024-11-27 11:51:35.921173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.016 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:47.392 [2024-11-27T11:51:37.445Z] ====================================== 00:07:47.392 [2024-11-27T11:51:37.445Z] busy:2501276688 (cyc) 00:07:47.392 [2024-11-27T11:51:37.445Z] total_run_count: 411000 00:07:47.392 [2024-11-27T11:51:37.445Z] tsc_hz: 2490000000 (cyc) 00:07:47.392 [2024-11-27T11:51:37.445Z] ====================================== 00:07:47.392 [2024-11-27T11:51:37.445Z] poller_cost: 6085 (cyc), 2443 (nsec) 00:07:47.392 00:07:47.392 real 0m1.565s 00:07:47.392 user 0m1.356s 00:07:47.392 sys 0m0.102s 00:07:47.392 ************************************ 00:07:47.392 END TEST thread_poller_perf 00:07:47.392 ************************************ 00:07:47.392 11:51:37 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.392 11:51:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:47.392 11:51:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:47.392 11:51:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:47.392 11:51:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.392 11:51:37 thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.392 ************************************ 00:07:47.392 START TEST thread_poller_perf 00:07:47.392 ************************************ 00:07:47.392 11:51:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:47.392 [2024-11-27 11:51:37.272759] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:47.392 [2024-11-27 11:51:37.272872] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60571 ] 00:07:47.650 [2024-11-27 11:51:37.449602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.650 Running 1000 pollers for 1 seconds with 0 microseconds period. 
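The poller_cost line above follows from the printed totals: busy cycles divided by total_run_count, then converted to nanoseconds via tsc_hz. A back-of-the-envelope check of the 1 us run, assuming that is how poller_perf derives it (the 0 us run below is consistent with the same formula):

    busy=2501276688 runs=411000 tsc_hz=2490000000
    echo $(( busy / runs ))                        # 6085 cycles per poller run
    echo $(( busy / runs * 1000000000 / tsc_hz ))  # ~2443 nsec at 2.49 GHz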
00:07:47.650 [2024-11-27 11:51:37.561635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.062 [2024-11-27T11:51:39.115Z] ====================================== 00:07:49.062 [2024-11-27T11:51:39.115Z] busy:2493959828 (cyc) 00:07:49.062 [2024-11-27T11:51:39.115Z] total_run_count: 5303000 00:07:49.062 [2024-11-27T11:51:39.115Z] tsc_hz: 2490000000 (cyc) 00:07:49.062 [2024-11-27T11:51:39.115Z] ====================================== 00:07:49.062 [2024-11-27T11:51:39.115Z] poller_cost: 470 (cyc), 188 (nsec) 00:07:49.062 00:07:49.063 real 0m1.570s 00:07:49.063 user 0m1.355s 00:07:49.063 sys 0m0.108s 00:07:49.063 ************************************ 00:07:49.063 END TEST thread_poller_perf 00:07:49.063 ************************************ 00:07:49.063 11:51:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.063 11:51:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:49.063 11:51:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:49.063 00:07:49.063 real 0m3.521s 00:07:49.063 user 0m2.881s 00:07:49.063 sys 0m0.430s 00:07:49.063 ************************************ 00:07:49.063 END TEST thread 00:07:49.063 ************************************ 00:07:49.063 11:51:38 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.063 11:51:38 thread -- common/autotest_common.sh@10 -- # set +x 00:07:49.063 11:51:38 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:49.063 11:51:38 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:49.063 11:51:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.063 11:51:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.063 11:51:38 -- common/autotest_common.sh@10 -- # set +x 00:07:49.063 ************************************ 00:07:49.063 START TEST app_cmdline 00:07:49.063 ************************************ 00:07:49.063 11:51:38 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:49.063 * Looking for test storage... 
00:07:49.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:49.063 11:51:39 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:49.063 11:51:39 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:49.063 11:51:39 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:49.332 11:51:39 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.332 11:51:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:49.333 11:51:39 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.333 11:51:39 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:49.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.333 --rc genhtml_branch_coverage=1 00:07:49.333 --rc genhtml_function_coverage=1 00:07:49.333 --rc genhtml_legend=1 00:07:49.333 --rc geninfo_all_blocks=1 00:07:49.333 --rc geninfo_unexecuted_blocks=1 00:07:49.333 00:07:49.333 ' 00:07:49.333 11:51:39 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:49.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.333 --rc genhtml_branch_coverage=1 00:07:49.333 --rc genhtml_function_coverage=1 00:07:49.333 --rc genhtml_legend=1 00:07:49.333 --rc geninfo_all_blocks=1 00:07:49.333 --rc geninfo_unexecuted_blocks=1 00:07:49.333 
00:07:49.333 ' 00:07:49.333 11:51:39 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:49.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.333 --rc genhtml_branch_coverage=1 00:07:49.333 --rc genhtml_function_coverage=1 00:07:49.333 --rc genhtml_legend=1 00:07:49.333 --rc geninfo_all_blocks=1 00:07:49.333 --rc geninfo_unexecuted_blocks=1 00:07:49.333 00:07:49.333 ' 00:07:49.333 11:51:39 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:49.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.333 --rc genhtml_branch_coverage=1 00:07:49.333 --rc genhtml_function_coverage=1 00:07:49.333 --rc genhtml_legend=1 00:07:49.333 --rc geninfo_all_blocks=1 00:07:49.333 --rc geninfo_unexecuted_blocks=1 00:07:49.333 00:07:49.333 ' 00:07:49.333 11:51:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:49.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.333 11:51:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60656 00:07:49.334 11:51:39 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:49.334 11:51:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60656 00:07:49.334 11:51:39 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60656 ']' 00:07:49.334 11:51:39 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.334 11:51:39 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.334 11:51:39 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.334 11:51:39 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.334 11:51:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:49.334 [2024-11-27 11:51:39.244620] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
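This cmdline test starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served. The success and failure cases exercised below reduce to roughly the following, using the same rpc.py invocations that appear later in the trace:

    # Allowed: returns the version object shown below.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
    # Any other method is rejected with JSON-RPC error -32601 ("Method not found").
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats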
00:07:49.334 [2024-11-27 11:51:39.244987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60656 ] 00:07:49.593 [2024-11-27 11:51:39.427437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.593 [2024-11-27 11:51:39.541027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.531 11:51:40 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.531 11:51:40 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:50.531 11:51:40 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:50.531 { 00:07:50.531 "version": "SPDK v25.01-pre git sha1 2f2acf4eb", 00:07:50.531 "fields": { 00:07:50.531 "major": 25, 00:07:50.531 "minor": 1, 00:07:50.531 "patch": 0, 00:07:50.531 "suffix": "-pre", 00:07:50.531 "commit": "2f2acf4eb" 00:07:50.531 } 00:07:50.531 } 00:07:50.791 11:51:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:50.791 11:51:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:50.791 11:51:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:50.791 11:51:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:50.791 11:51:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:50.791 11:51:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:50.791 11:51:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:50.791 11:51:40 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.791 11:51:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:50.791 11:51:40 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.791 11:51:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:50.791 11:51:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:50.791 11:51:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:50.791 11:51:40 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:50.791 11:51:40 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:50.791 11:51:40 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:50.791 11:51:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.791 11:51:40 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:50.791 11:51:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.791 11:51:40 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:50.791 11:51:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.791 11:51:40 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:50.791 11:51:40 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:50.791 11:51:40 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:50.791 request: 00:07:50.791 { 00:07:50.791 "method": "env_dpdk_get_mem_stats", 00:07:50.791 "req_id": 1 00:07:50.791 } 00:07:50.791 Got JSON-RPC error response 00:07:50.791 response: 00:07:50.791 { 00:07:50.791 "code": -32601, 00:07:50.791 "message": "Method not found" 00:07:50.791 } 00:07:51.052 11:51:40 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:51.052 11:51:40 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:51.052 11:51:40 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:51.052 11:51:40 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:51.052 11:51:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60656 00:07:51.052 11:51:40 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60656 ']' 00:07:51.052 11:51:40 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60656 00:07:51.052 11:51:40 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:51.052 11:51:40 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.052 11:51:40 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60656 00:07:51.052 killing process with pid 60656 00:07:51.052 11:51:40 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.052 11:51:40 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.052 11:51:40 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60656' 00:07:51.052 11:51:40 app_cmdline -- common/autotest_common.sh@973 -- # kill 60656 00:07:51.052 11:51:40 app_cmdline -- common/autotest_common.sh@978 -- # wait 60656 00:07:53.590 00:07:53.590 real 0m4.326s 00:07:53.590 user 0m4.463s 00:07:53.590 sys 0m0.667s 00:07:53.590 11:51:43 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.590 ************************************ 00:07:53.590 END TEST app_cmdline 00:07:53.590 ************************************ 00:07:53.590 11:51:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:53.590 11:51:43 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:53.590 11:51:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.590 11:51:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.590 11:51:43 -- common/autotest_common.sh@10 -- # set +x 00:07:53.590 ************************************ 00:07:53.590 START TEST version 00:07:53.590 ************************************ 00:07:53.590 11:51:43 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:53.590 * Looking for test storage... 
00:07:53.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:53.590 11:51:43 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:53.590 11:51:43 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:53.590 11:51:43 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:53.590 11:51:43 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:53.590 11:51:43 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.590 11:51:43 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.590 11:51:43 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.590 11:51:43 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.590 11:51:43 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.590 11:51:43 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.590 11:51:43 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.590 11:51:43 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.590 11:51:43 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.590 11:51:43 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.590 11:51:43 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.590 11:51:43 version -- scripts/common.sh@344 -- # case "$op" in 00:07:53.590 11:51:43 version -- scripts/common.sh@345 -- # : 1 00:07:53.590 11:51:43 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.590 11:51:43 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:53.590 11:51:43 version -- scripts/common.sh@365 -- # decimal 1 00:07:53.590 11:51:43 version -- scripts/common.sh@353 -- # local d=1 00:07:53.590 11:51:43 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.590 11:51:43 version -- scripts/common.sh@355 -- # echo 1 00:07:53.590 11:51:43 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.590 11:51:43 version -- scripts/common.sh@366 -- # decimal 2 00:07:53.590 11:51:43 version -- scripts/common.sh@353 -- # local d=2 00:07:53.590 11:51:43 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.590 11:51:43 version -- scripts/common.sh@355 -- # echo 2 00:07:53.590 11:51:43 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.590 11:51:43 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.590 11:51:43 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.590 11:51:43 version -- scripts/common.sh@368 -- # return 0 00:07:53.590 11:51:43 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.590 11:51:43 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:53.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.590 --rc genhtml_branch_coverage=1 00:07:53.590 --rc genhtml_function_coverage=1 00:07:53.590 --rc genhtml_legend=1 00:07:53.590 --rc geninfo_all_blocks=1 00:07:53.590 --rc geninfo_unexecuted_blocks=1 00:07:53.590 00:07:53.590 ' 00:07:53.590 11:51:43 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:53.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.590 --rc genhtml_branch_coverage=1 00:07:53.590 --rc genhtml_function_coverage=1 00:07:53.590 --rc genhtml_legend=1 00:07:53.590 --rc geninfo_all_blocks=1 00:07:53.590 --rc geninfo_unexecuted_blocks=1 00:07:53.590 00:07:53.590 ' 00:07:53.590 11:51:43 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:53.590 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:53.590 --rc genhtml_branch_coverage=1 00:07:53.590 --rc genhtml_function_coverage=1 00:07:53.590 --rc genhtml_legend=1 00:07:53.590 --rc geninfo_all_blocks=1 00:07:53.590 --rc geninfo_unexecuted_blocks=1 00:07:53.590 00:07:53.590 ' 00:07:53.590 11:51:43 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:53.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.590 --rc genhtml_branch_coverage=1 00:07:53.590 --rc genhtml_function_coverage=1 00:07:53.590 --rc genhtml_legend=1 00:07:53.590 --rc geninfo_all_blocks=1 00:07:53.590 --rc geninfo_unexecuted_blocks=1 00:07:53.590 00:07:53.590 ' 00:07:53.590 11:51:43 version -- app/version.sh@17 -- # get_header_version major 00:07:53.590 11:51:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:53.590 11:51:43 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.590 11:51:43 version -- app/version.sh@14 -- # cut -f2 00:07:53.590 11:51:43 version -- app/version.sh@17 -- # major=25 00:07:53.590 11:51:43 version -- app/version.sh@18 -- # get_header_version minor 00:07:53.590 11:51:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:53.590 11:51:43 version -- app/version.sh@14 -- # cut -f2 00:07:53.590 11:51:43 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.590 11:51:43 version -- app/version.sh@18 -- # minor=1 00:07:53.590 11:51:43 version -- app/version.sh@19 -- # get_header_version patch 00:07:53.590 11:51:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:53.590 11:51:43 version -- app/version.sh@14 -- # cut -f2 00:07:53.590 11:51:43 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.590 11:51:43 version -- app/version.sh@19 -- # patch=0 00:07:53.590 11:51:43 version -- app/version.sh@20 -- # get_header_version suffix 00:07:53.590 11:51:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:53.590 11:51:43 version -- app/version.sh@14 -- # cut -f2 00:07:53.590 11:51:43 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.590 11:51:43 version -- app/version.sh@20 -- # suffix=-pre 00:07:53.590 11:51:43 version -- app/version.sh@22 -- # version=25.1 00:07:53.590 11:51:43 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:53.590 11:51:43 version -- app/version.sh@28 -- # version=25.1rc0 00:07:53.590 11:51:43 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:53.590 11:51:43 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:53.850 11:51:43 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:53.850 11:51:43 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:53.850 ************************************ 00:07:53.850 00:07:53.850 real 0m0.337s 00:07:53.850 user 0m0.193s 00:07:53.850 sys 0m0.190s 00:07:53.850 11:51:43 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.850 11:51:43 version -- common/autotest_common.sh@10 -- # set +x 00:07:53.850 END TEST version 00:07:53.850 ************************************ 00:07:53.850 11:51:43 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:53.850 11:51:43 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:53.850 11:51:43 -- spdk/autotest.sh@194 -- # uname -s 00:07:53.850 11:51:43 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:53.850 11:51:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:53.850 11:51:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:53.850 11:51:43 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:53.850 11:51:43 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:53.850 11:51:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:53.850 11:51:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.850 11:51:43 -- common/autotest_common.sh@10 -- # set +x 00:07:53.850 ************************************ 00:07:53.850 START TEST blockdev_nvme 00:07:53.850 ************************************ 00:07:53.850 11:51:43 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:53.850 * Looking for test storage... 00:07:53.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:53.850 11:51:43 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:53.850 11:51:43 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:53.850 11:51:43 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:54.109 11:51:43 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.109 11:51:43 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:54.109 11:51:43 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.110 11:51:43 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:54.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.110 --rc genhtml_branch_coverage=1 00:07:54.110 --rc genhtml_function_coverage=1 00:07:54.110 --rc genhtml_legend=1 00:07:54.110 --rc geninfo_all_blocks=1 00:07:54.110 --rc geninfo_unexecuted_blocks=1 00:07:54.110 00:07:54.110 ' 00:07:54.110 11:51:43 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:54.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.110 --rc genhtml_branch_coverage=1 00:07:54.110 --rc genhtml_function_coverage=1 00:07:54.110 --rc genhtml_legend=1 00:07:54.110 --rc geninfo_all_blocks=1 00:07:54.110 --rc geninfo_unexecuted_blocks=1 00:07:54.110 00:07:54.110 ' 00:07:54.110 11:51:43 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:54.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.110 --rc genhtml_branch_coverage=1 00:07:54.110 --rc genhtml_function_coverage=1 00:07:54.110 --rc genhtml_legend=1 00:07:54.110 --rc geninfo_all_blocks=1 00:07:54.110 --rc geninfo_unexecuted_blocks=1 00:07:54.110 00:07:54.110 ' 00:07:54.110 11:51:43 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:54.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.110 --rc genhtml_branch_coverage=1 00:07:54.110 --rc genhtml_function_coverage=1 00:07:54.110 --rc genhtml_legend=1 00:07:54.110 --rc geninfo_all_blocks=1 00:07:54.110 --rc geninfo_unexecuted_blocks=1 00:07:54.110 00:07:54.110 ' 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:54.110 11:51:43 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60850 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:54.110 11:51:43 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60850 00:07:54.110 11:51:43 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 60850 ']' 00:07:54.110 11:51:43 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.110 11:51:43 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.110 11:51:43 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.110 11:51:43 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.110 11:51:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.110 [2024-11-27 11:51:44.073652] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
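blockdev_nvme builds its bdev config from gen_nvme.sh and feeds it to load_subsystem_config (the full JSON appears in the trace just below), one bdev_nvme_attach_controller entry per PCIe controller. The same attachment can be sketched as a direct RPC call for the first controller; the -b/-t/-a flag spellings are an assumption here, since the test drives this via the JSON config rather than through rpc.py:

    # gen_nvme.sh emits one attach entry each for 0000:00:10.0 .. 0000:00:13.0.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme0 -t PCIe -a 0000:00:10.0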
00:07:54.110 [2024-11-27 11:51:44.074303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60850 ] 00:07:54.369 [2024-11-27 11:51:44.258168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.369 [2024-11-27 11:51:44.375442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.307 11:51:45 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.307 11:51:45 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:07:55.307 11:51:45 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:55.307 11:51:45 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:07:55.307 11:51:45 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:55.307 11:51:45 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:55.307 11:51:45 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:55.307 11:51:45 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:55.307 11:51:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.307 11:51:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.876 11:51:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.876 11:51:45 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:55.876 11:51:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.876 11:51:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.876 11:51:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.876 11:51:45 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:07:55.876 11:51:45 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:55.876 11:51:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.876 11:51:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.876 11:51:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.876 11:51:45 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:55.876 11:51:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.876 11:51:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.876 11:51:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.876 11:51:45 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:55.876 11:51:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.876 11:51:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.876 11:51:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.876 11:51:45 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:55.876 11:51:45 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:55.876 11:51:45 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:55.876 11:51:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.876 11:51:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.876 11:51:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.876 11:51:45 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:55.876 11:51:45 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:55.877 11:51:45 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "7a30c16e-84e7-4a2d-8482-ae9f16cc5f30"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "7a30c16e-84e7-4a2d-8482-ae9f16cc5f30",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "6cad7cda-339c-4ed9-91fe-f8119e5cf0ca"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6cad7cda-339c-4ed9-91fe-f8119e5cf0ca",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "df01ebab-089c-4281-a082-85b28b34ec9c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "df01ebab-089c-4281-a082-85b28b34ec9c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "0e6203e3-aa93-42a1-b96b-4fab5f216db4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0e6203e3-aa93-42a1-b96b-4fab5f216db4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "26e7ae4f-59a6-48c1-9463-d85375635ab4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "26e7ae4f-59a6-48c1-9463-d85375635ab4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "3de8a594-5410-4070-a602-59c31c6c8ce9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3de8a594-5410-4070-a602-59c31c6c8ce9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:55.877 11:51:45 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:55.877 11:51:45 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:55.877 11:51:45 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:55.877 11:51:45 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 60850 00:07:55.877 11:51:45 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 60850 ']' 00:07:55.877 11:51:45 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 60850 00:07:55.877 11:51:45 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:07:55.877 11:51:45 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.877 11:51:45 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60850 00:07:56.136 11:51:45 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.136 killing process with pid 60850 00:07:56.136 11:51:45 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.136 11:51:45 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60850' 00:07:56.136 11:51:45 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 60850 00:07:56.136 11:51:45 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 60850 00:07:58.674 11:51:48 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:58.674 11:51:48 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:58.674 11:51:48 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:58.674 11:51:48 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.674 11:51:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.674 ************************************ 00:07:58.674 START TEST bdev_hello_world 00:07:58.674 ************************************ 00:07:58.674 11:51:48 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:58.674 [2024-11-27 11:51:48.400375] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:58.674 [2024-11-27 11:51:48.400501] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60945 ] 00:07:58.674 [2024-11-27 11:51:48.582987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.674 [2024-11-27 11:51:48.701701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.612 [2024-11-27 11:51:49.354836] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:59.612 [2024-11-27 11:51:49.354880] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:59.612 [2024-11-27 11:51:49.354921] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:59.612 [2024-11-27 11:51:49.357890] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:59.612 [2024-11-27 11:51:49.358621] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:59.612 [2024-11-27 11:51:49.358646] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:59.612 [2024-11-27 11:51:49.358878] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
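For context, the bdev_hello_world stage above is nothing more than SPDK's bundled hello_bdev example run against the generated bdev config; a by-hand equivalent, using the same paths as this run and assuming hugepages and device bindings were already set up by the harness, is:

  # sketch: write "Hello World!" through bdev Nvme0n1 and read it back
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Nvme0n1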
00:07:59.612 00:07:59.612 [2024-11-27 11:51:49.358901] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:00.550 00:08:00.550 real 0m2.175s 00:08:00.550 user 0m1.808s 00:08:00.550 sys 0m0.259s 00:08:00.550 11:51:50 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.550 11:51:50 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:00.550 ************************************ 00:08:00.550 END TEST bdev_hello_world 00:08:00.550 ************************************ 00:08:00.550 11:51:50 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:08:00.550 11:51:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:00.550 11:51:50 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.550 11:51:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.550 ************************************ 00:08:00.550 START TEST bdev_bounds 00:08:00.550 ************************************ 00:08:00.550 11:51:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:08:00.550 11:51:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60987 00:08:00.550 11:51:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:00.550 11:51:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:00.550 Process bdevio pid: 60987 00:08:00.550 11:51:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60987' 00:08:00.550 11:51:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60987 00:08:00.550 11:51:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 60987 ']' 00:08:00.550 11:51:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.551 11:51:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.551 11:51:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.551 11:51:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.551 11:51:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:00.809 [2024-11-27 11:51:50.654288] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
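Stripped of the run_test wrapper, the bdev_bounds stage starting here amounts to launching bdevio and then triggering its suites over RPC; a minimal sketch (backgrounding assumed; bdevio listens on the default /var/tmp/spdk.sock that waitforlisten polls above):

  # start bdevio with the same bdev config; the suites are kicked off over RPC below
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  # once the socket is up, drive the CUnit suites that follow
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests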
00:08:00.809 [2024-11-27 11:51:50.654425] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60987 ] 00:08:00.809 [2024-11-27 11:51:50.835667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.068 [2024-11-27 11:51:50.959432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.068 [2024-11-27 11:51:50.959536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.068 [2024-11-27 11:51:50.959580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.637 11:51:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.637 11:51:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:08:01.638 11:51:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:01.898 I/O targets: 00:08:01.898 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:01.898 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:01.898 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:01.898 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:01.898 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:01.898 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:01.898 00:08:01.898 00:08:01.898 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.898 http://cunit.sourceforge.net/ 00:08:01.898 00:08:01.898 00:08:01.898 Suite: bdevio tests on: Nvme3n1 00:08:01.898 Test: blockdev write read block ...passed 00:08:01.898 Test: blockdev write zeroes read block ...passed 00:08:01.898 Test: blockdev write zeroes read no split ...passed 00:08:01.898 Test: blockdev write zeroes read split ...passed 00:08:01.898 Test: blockdev write zeroes read split partial ...passed 00:08:01.898 Test: blockdev reset ...[2024-11-27 11:51:51.831710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:01.898 [2024-11-27 11:51:51.835624] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
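The six I/O targets just listed are exactly the unclaimed bdevs from the earlier bdev_get_bdevs dump; the harness builds that list with a jq pipeline along these lines (a condensed sketch of the two mapfile steps at blockdev.sh@785-786):

  # unclaimed bdev names, per the select(.claimed == false) filter used above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select(.claimed == false) | .name'
  # -> Nvme0n1, Nvme1n1, Nvme2n1, Nvme2n2, Nvme2n3, Nvme3n1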
00:08:01.898 passed 00:08:01.898 Test: blockdev write read 8 blocks ...passed 00:08:01.898 Test: blockdev write read size > 128k ...passed 00:08:01.898 Test: blockdev write read invalid size ...passed 00:08:01.898 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.898 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.898 Test: blockdev write read max offset ...passed 00:08:01.898 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.898 Test: blockdev writev readv 8 blocks ...passed 00:08:01.898 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.898 Test: blockdev writev readv block ...passed 00:08:01.898 Test: blockdev writev readv size > 128k ...passed 00:08:01.898 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.898 Test: blockdev comparev and writev ...[2024-11-27 11:51:51.844046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b660a000 len:0x1000 00:08:01.898 passed 00:08:01.898 Test: blockdev nvme passthru rw ...[2024-11-27 11:51:51.844296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:01.898 passed 00:08:01.898 Test: blockdev nvme passthru vendor specific ...[2024-11-27 11:51:51.845160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:01.898 passed 00:08:01.898 Test: blockdev nvme admin passthru ...[2024-11-27 11:51:51.845378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:01.898 passed 00:08:01.898 Test: blockdev copy ...passed 00:08:01.898 Suite: bdevio tests on: Nvme2n3 00:08:01.898 Test: blockdev write read block ...passed 00:08:01.898 Test: blockdev write zeroes read block ...passed 00:08:01.898 Test: blockdev write zeroes read no split ...passed 00:08:01.898 Test: blockdev write zeroes read split ...passed 00:08:01.898 Test: blockdev write zeroes read split partial ...passed 00:08:01.898 Test: blockdev reset ...[2024-11-27 11:51:51.924812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:01.898 [2024-11-27 11:51:51.928943] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:08:01.898 passed 00:08:01.898 Test: blockdev write read 8 blocks ...
00:08:01.898 passed 00:08:01.898 Test: blockdev write read size > 128k ...passed 00:08:01.898 Test: blockdev write read invalid size ...passed 00:08:01.898 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.898 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.898 Test: blockdev write read max offset ...passed 00:08:01.898 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.898 Test: blockdev writev readv 8 blocks ...passed 00:08:01.898 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.898 Test: blockdev writev readv block ...passed 00:08:01.898 Test: blockdev writev readv size > 128k ...passed 00:08:01.898 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.898 Test: blockdev comparev and writev ...[2024-11-27 11:51:51.938148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x299006000 len:0x1000 00:08:01.898 passed 00:08:01.898 Test: blockdev nvme passthru rw ...[2024-11-27 11:51:51.938386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:01.898 passed 00:08:01.898 Test: blockdev nvme passthru vendor specific ...[2024-11-27 11:51:51.939304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:01.898 passed 00:08:01.898 Test: blockdev nvme admin passthru ...[2024-11-27 11:51:51.939501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:01.898 passed 00:08:01.898 Test: blockdev copy ...passed 00:08:01.898 Suite: bdevio tests on: Nvme2n2 00:08:01.898 Test: blockdev write read block ...passed 00:08:01.898 Test: blockdev write zeroes read block ...passed 00:08:02.158 Test: blockdev write zeroes read no split ...passed 00:08:02.158 Test: blockdev write zeroes read split ...passed 00:08:02.158 Test: blockdev write zeroes read split partial ...passed 00:08:02.158 Test: blockdev reset ...[2024-11-27 11:51:52.016759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:02.158 [2024-11-27 11:51:52.020976] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:08:02.158 passed 00:08:02.159 Test: blockdev write read 8 blocks ...passed 00:08:02.159 Test: blockdev write read size > 128k ...passed 00:08:02.159 Test: blockdev write read invalid size ...passed 00:08:02.159 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:02.159 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:02.159 Test: blockdev write read max offset ...passed 00:08:02.159 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:02.159 Test: blockdev writev readv 8 blocks ...passed 00:08:02.159 Test: blockdev writev readv 30 x 1block ...passed 00:08:02.159 Test: blockdev writev readv block ...passed 00:08:02.159 Test: blockdev writev readv size > 128k ...passed 00:08:02.159 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:02.159 Test: blockdev comparev and writev ...[2024-11-27 11:51:52.030705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c663c000 len:0x1000 00:08:02.159 passed 00:08:02.159 Test: blockdev nvme passthru rw ...[2024-11-27 11:51:52.030918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:02.159 passed 00:08:02.159 Test: blockdev nvme passthru vendor specific ...[2024-11-27 11:51:52.031893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:02.159 passed 00:08:02.159 Test: blockdev nvme admin passthru ...[2024-11-27 11:51:52.032088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:02.159 passed 00:08:02.159 Test: blockdev copy ...passed 00:08:02.159 Suite: bdevio tests on: Nvme2n1 00:08:02.159 Test: blockdev write read block ...passed 00:08:02.159 Test: blockdev write zeroes read block ...passed 00:08:02.159 Test: blockdev write zeroes read no split ...passed 00:08:02.159 Test: blockdev write zeroes read split ...passed 00:08:02.159 Test: blockdev write zeroes read split partial ...passed 00:08:02.159 Test: blockdev reset ...[2024-11-27 11:51:52.109352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:02.159 [2024-11-27 11:51:52.113327] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:08:02.159 passed 00:08:02.159 Test: blockdev write read 8 blocks ...passed 00:08:02.159 Test: blockdev write read size > 128k ...passed 00:08:02.159 Test: blockdev write read invalid size ...passed 00:08:02.159 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:02.159 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:02.159 Test: blockdev write read max offset ...passed 00:08:02.159 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:02.159 Test: blockdev writev readv 8 blocks ...passed 00:08:02.159 Test: blockdev writev readv 30 x 1block ...passed 00:08:02.159 Test: blockdev writev readv block ...passed 00:08:02.159 Test: blockdev writev readv size > 128k ...passed 00:08:02.159 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:02.159 Test: blockdev comparev and writev ...[2024-11-27 11:51:52.123452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c6638000 len:0x1000 [2024-11-27 11:51:52.123704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:02.159 passed 00:08:02.159 Test: blockdev nvme passthru rw ...passed 00:08:02.159 Test: blockdev nvme passthru vendor specific ...[2024-11-27 11:51:52.125029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 [2024-11-27 11:51:52.125247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:02.159 passed 00:08:02.159 Test: blockdev nvme admin passthru ...passed 00:08:02.159 Test: blockdev copy ...passed 00:08:02.159 Suite: bdevio tests on: Nvme1n1 00:08:02.159 Test: blockdev write read block ...passed 00:08:02.159 Test: blockdev write zeroes read block ...passed 00:08:02.159 Test: blockdev write zeroes read no split ...passed 00:08:02.159 Test: blockdev write zeroes read split ...passed 00:08:02.159 Test: blockdev write zeroes read split partial ...passed 00:08:02.159 Test: blockdev reset ...[2024-11-27 11:51:52.200695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:02.159 [2024-11-27 11:51:52.204340] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:08:02.159 passed 00:08:02.159 Test: blockdev write read 8 blocks ...
00:08:02.159 passed 00:08:02.159 Test: blockdev write read size > 128k ...passed 00:08:02.159 Test: blockdev write read invalid size ...passed 00:08:02.159 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:02.159 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:02.159 Test: blockdev write read max offset ...passed 00:08:02.159 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:02.159 Test: blockdev writev readv 8 blocks ...passed 00:08:02.419 Test: blockdev writev readv 30 x 1block ...passed 00:08:02.419 Test: blockdev writev readv block ...passed 00:08:02.419 Test: blockdev writev readv size > 128k ...passed 00:08:02.419 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:02.419 Test: blockdev comparev and writev ...[2024-11-27 11:51:52.213254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c6634000 len:0x1000 00:08:02.419 passed 00:08:02.419 Test: blockdev nvme passthru rw ...[2024-11-27 11:51:52.213484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:02.419 passed 00:08:02.419 Test: blockdev nvme passthru vendor specific ...[2024-11-27 11:51:52.214426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:02.419 passed 00:08:02.419 Test: blockdev nvme admin passthru ...[2024-11-27 11:51:52.214611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:02.419 passed 00:08:02.419 Test: blockdev copy ...passed 00:08:02.419 Suite: bdevio tests on: Nvme0n1 00:08:02.419 Test: blockdev write read block ...passed 00:08:02.419 Test: blockdev write zeroes read block ...passed 00:08:02.419 Test: blockdev write zeroes read no split ...passed 00:08:02.419 Test: blockdev write zeroes read split ...passed 00:08:02.419 Test: blockdev write zeroes read split partial ...passed 00:08:02.419 Test: blockdev reset ...[2024-11-27 11:51:52.294427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:02.419 [2024-11-27 11:51:52.298259] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:02.419 passed 00:08:02.419 Test: blockdev write read 8 blocks ...passed 00:08:02.419 Test: blockdev write read size > 128k ...passed 00:08:02.419 Test: blockdev write read invalid size ...passed 00:08:02.419 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:02.419 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:02.419 Test: blockdev write read max offset ...passed 00:08:02.419 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:02.419 Test: blockdev writev readv 8 blocks ...passed 00:08:02.419 Test: blockdev writev readv 30 x 1block ...passed 00:08:02.419 Test: blockdev writev readv block ...passed 00:08:02.419 Test: blockdev writev readv size > 128k ...passed 00:08:02.419 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:02.419 Test: blockdev comparev and writev ...passed 00:08:02.419 Test: blockdev nvme passthru rw ...[2024-11-27 11:51:52.307263] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:02.419 separate metadata which is not supported yet. 
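The comparev_and_writev skip just logged is specific to Nvme0n1, the only target in this run exposing separate (non-interleaved) metadata; that is visible in the earlier bdev dump and can be re-checked with, e.g.:

  # Nvme0n1 reported "md_size": 64 and "md_interleave": false in the dump above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
      | jq '.[0] | {md_size, md_interleave, dif_type}'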
00:08:02.419 passed 00:08:02.419 Test: blockdev nvme passthru vendor specific ...[2024-11-27 11:51:52.308095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:02.419 [2024-11-27 11:51:52.308383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:02.419 passed 00:08:02.419 Test: blockdev nvme admin passthru ...passed 00:08:02.419 Test: blockdev copy ...passed 00:08:02.419 00:08:02.419 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.419 suites 6 6 n/a 0 0 00:08:02.419 tests 138 138 138 0 0 00:08:02.419 asserts 893 893 893 0 n/a 00:08:02.419 00:08:02.419 Elapsed time = 1.483 seconds 00:08:02.419 0 00:08:02.419 11:51:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60987 00:08:02.419 11:51:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 60987 ']' 00:08:02.419 11:51:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 60987 00:08:02.419 11:51:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:08:02.419 11:51:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.419 11:51:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60987 00:08:02.419 killing process with pid 60987 11:51:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.419 11:51:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.419 11:51:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60987' 00:08:02.419 11:51:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 60987 00:08:02.419 11:51:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 60987 00:08:03.384 11:51:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:03.384 00:08:03.384 real 0m2.867s 00:08:03.384 user 0m7.315s 00:08:03.384 sys 0m0.402s 00:08:03.384 11:51:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.384 11:51:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:03.384 ************************************ 00:08:03.384 END TEST bdev_bounds 00:08:03.384 ************************************ 00:08:03.676 11:51:53 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:03.676 11:51:53 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:03.676 11:51:53 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.676 11:51:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.676 ************************************ 00:08:03.676 START TEST bdev_nbd 00:08:03.676 ************************************ 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local
rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61055 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61055 /var/tmp/spdk-nbd.sock 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61055 ']' 00:08:03.676 11:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:03.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:03.677 11:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.677 11:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:03.677 11:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.677 11:51:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:03.677 [2024-11-27 11:51:53.606552] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
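Everything in the bdev_nbd stage below goes through the private RPC socket (-r /var/tmp/spdk-nbd.sock) that bdev_svc is being started with here; the start/list/stop cycle the nbd_common.sh helpers wrap reduces to RPC calls like the following (sketch; one NBD node per bdev, paired as in this run):

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
  $RPC nbd_start_disk Nvme0n1 /dev/nbd0   # export a bdev as a kernel NBD device
  $RPC nbd_get_disks                      # JSON list of nbd_device/bdev_name pairs
  $RPC nbd_stop_disk /dev/nbd0            # tear the mapping back down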
00:08:03.677 [2024-11-27 11:51:53.606679] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.952 [2024-11-27 11:51:53.787265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.952 [2024-11-27 11:51:53.908649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.890 1+0 records in 
00:08:04.890 1+0 records out 00:08:04.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00064996 s, 6.3 MB/s 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:04.890 11:51:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:05.149 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:05.149 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:05.149 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:05.149 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:05.149 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:05.149 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:05.149 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:05.149 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:05.149 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:05.149 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:05.149 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:05.149 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.149 1+0 records in 00:08:05.149 1+0 records out 00:08:05.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606192 s, 6.8 MB/s 00:08:05.149 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.149 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:05.149 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.149 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:05.149 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:05.149 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:05.149 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:05.149 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:05.409 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:05.409 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:05.409 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:08:05.409 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:08:05.409 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:05.409 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:05.409 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:05.409 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:08:05.409 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:05.409 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:05.409 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:05.409 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.409 1+0 records in 00:08:05.409 1+0 records out 00:08:05.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000710088 s, 5.8 MB/s 00:08:05.409 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.409 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:05.409 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.409 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:05.409 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:05.409 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:05.409 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:05.409 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:05.669 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:05.669 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:05.669 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:05.669 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:08:05.669 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:05.669 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:05.669 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:05.669 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:08:05.669 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:05.669 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:05.669 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:05.669 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.669 1+0 records in 00:08:05.669 1+0 records out 00:08:05.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000757836 s, 5.4 MB/s 00:08:05.669 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.669 11:51:55 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:05.669 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.669 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:05.669 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:05.669 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:05.669 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:05.669 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:05.928 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:05.928 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:05.928 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:05.928 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:08:05.928 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:05.928 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:05.928 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:05.928 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:08:05.928 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:05.928 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:05.928 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:05.928 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.928 1+0 records in 00:08:05.928 1+0 records out 00:08:05.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000591588 s, 6.9 MB/s 00:08:05.928 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.928 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:05.928 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.928 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:05.928 11:51:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:05.928 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:05.928 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:05.928 11:51:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:06.188 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:06.188 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:06.188 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:06.188 11:51:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:08:06.188 11:51:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:06.188 11:51:56 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:06.188 11:51:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:06.188 11:51:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:08:06.188 11:51:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:06.188 11:51:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:06.188 11:51:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:06.188 11:51:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.188 1+0 records in 00:08:06.188 1+0 records out 00:08:06.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000802874 s, 5.1 MB/s 00:08:06.188 11:51:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.188 11:51:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:06.188 11:51:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.188 11:51:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:06.188 11:51:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:06.188 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:06.188 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:06.188 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.447 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:06.447 { 00:08:06.447 "nbd_device": "/dev/nbd0", 00:08:06.447 "bdev_name": "Nvme0n1" 00:08:06.447 }, 00:08:06.447 { 00:08:06.447 "nbd_device": "/dev/nbd1", 00:08:06.447 "bdev_name": "Nvme1n1" 00:08:06.447 }, 00:08:06.447 { 00:08:06.447 "nbd_device": "/dev/nbd2", 00:08:06.447 "bdev_name": "Nvme2n1" 00:08:06.447 }, 00:08:06.447 { 00:08:06.447 "nbd_device": "/dev/nbd3", 00:08:06.447 "bdev_name": "Nvme2n2" 00:08:06.447 }, 00:08:06.447 { 00:08:06.447 "nbd_device": "/dev/nbd4", 00:08:06.447 "bdev_name": "Nvme2n3" 00:08:06.447 }, 00:08:06.447 { 00:08:06.447 "nbd_device": "/dev/nbd5", 00:08:06.447 "bdev_name": "Nvme3n1" 00:08:06.447 } 00:08:06.447 ]' 00:08:06.447 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:06.447 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:06.447 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:06.447 { 00:08:06.447 "nbd_device": "/dev/nbd0", 00:08:06.447 "bdev_name": "Nvme0n1" 00:08:06.447 }, 00:08:06.447 { 00:08:06.447 "nbd_device": "/dev/nbd1", 00:08:06.447 "bdev_name": "Nvme1n1" 00:08:06.447 }, 00:08:06.447 { 00:08:06.447 "nbd_device": "/dev/nbd2", 00:08:06.447 "bdev_name": "Nvme2n1" 00:08:06.447 }, 00:08:06.447 { 00:08:06.447 "nbd_device": "/dev/nbd3", 00:08:06.447 "bdev_name": "Nvme2n2" 00:08:06.447 }, 00:08:06.447 { 00:08:06.447 "nbd_device": "/dev/nbd4", 00:08:06.447 "bdev_name": "Nvme2n3" 00:08:06.447 }, 00:08:06.447 { 00:08:06.447 "nbd_device": "/dev/nbd5", 00:08:06.447 "bdev_name": "Nvme3n1" 00:08:06.447 } 00:08:06.447 ]' 00:08:06.447 11:51:56 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:06.447 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.447 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:06.447 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:06.447 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:06.447 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.447 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:06.706 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:06.706 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:06.706 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:06.706 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.706 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.706 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:06.706 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.706 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.706 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.706 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:06.965 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:06.965 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:06.965 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:06.965 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.965 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.965 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:06.965 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.965 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.965 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.965 11:51:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:07.224 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:07.224 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:07.224 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:07.224 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.224 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.224 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:07.224 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.224 11:51:57 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:07.224 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.224 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:07.224 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:07.224 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:07.224 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:07.224 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.224 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.224 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:07.224 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.224 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.224 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.224 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:07.484 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:07.484 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:07.484 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:07.484 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.484 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.484 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:07.484 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.484 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.484 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.484 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:07.743 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:07.743 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:07.743 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:07.743 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.743 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.743 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:07.743 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.743 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.743 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:07.743 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.743 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:08.003 11:51:57 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:08.003 11:51:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:08.263 /dev/nbd0 00:08:08.263 11:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:08.263 11:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:08.263 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:08.263 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:08.263 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:08.263 
11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:08.263 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:08.263 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:08.263 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:08.263 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:08.263 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.263 1+0 records in 00:08:08.263 1+0 records out 00:08:08.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438084 s, 9.3 MB/s 00:08:08.263 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.263 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:08.263 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.263 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:08.263 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:08.263 11:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.263 11:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:08.263 11:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:08.523 /dev/nbd1 00:08:08.523 11:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:08.523 11:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:08.523 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:08.523 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:08.523 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:08.523 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:08.523 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:08.523 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:08.523 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:08.523 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:08.523 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.523 1+0 records in 00:08:08.523 1+0 records out 00:08:08.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539656 s, 7.6 MB/s 00:08:08.523 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.523 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:08.523 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.523 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:08.523 11:51:58 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:08:08.523 11:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.523 11:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:08.523 11:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:08.783 /dev/nbd10 00:08:08.783 11:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:08.783 11:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:08.783 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:08:08.783 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:08.783 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:08.783 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:08.783 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:08:08.783 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:08.783 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:08.783 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:08.783 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.783 1+0 records in 00:08:08.783 1+0 records out 00:08:08.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476273 s, 8.6 MB/s 00:08:08.783 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.783 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:08.783 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.783 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:08.783 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:08.783 11:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.783 11:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:08.783 11:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:09.043 /dev/nbd11 00:08:09.043 11:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:09.043 11:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:09.043 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:08:09.043 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:09.043 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:09.043 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:09.043 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:08:09.043 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:09.043 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:09.043 11:51:58 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:09.043 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.043 1+0 records in 00:08:09.043 1+0 records out 00:08:09.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109482 s, 3.7 MB/s 00:08:09.043 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.043 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:09.043 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.043 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:09.043 11:51:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:09.043 11:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:09.043 11:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:09.043 11:51:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:09.301 /dev/nbd12 00:08:09.301 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:09.301 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:09.301 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:08:09.301 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:09.301 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:09.301 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:09.301 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:08:09.301 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:09.301 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:09.301 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:09.301 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.301 1+0 records in 00:08:09.301 1+0 records out 00:08:09.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000921544 s, 4.4 MB/s 00:08:09.301 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.301 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:09.301 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.301 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:09.301 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:09.301 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:09.301 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:09.301 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:09.561 /dev/nbd13 
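The waitfornbd helper traced here gates every nbd_start_disk call: the RPC only asks the SPDK app to export the bdev as /dev/nbdX, so the test must wait for the kernel to finish bringing the device up before touching it. A minimal sketch of the helper, reconstructed from the xtrace above (the real implementation lives in test/common/autotest_common.sh; the temp-file path and the sleep between polls are assumptions, not shown in the trace):

    # Reconstruction of waitfornbd from the xtrace; not the verbatim source.
    waitfornbd() {
        local nbd_name=$1 i size
        # Phase 1: poll (up to 20 tries) until the kernel lists the device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off interval
        done
        # Phase 2: prove the device is readable with one direct-I/O 4 KiB read.
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }

The teardown path seen earlier uses the mirror-image helper, waitfornbd_exit (nbd_common.sh@35-45), which polls /proc/partitions until the name disappears after nbd_stop_disk.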
00:08:09.561 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:09.561 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:09.561 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:08:09.561 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:09.561 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:09.561 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:09.561 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:08:09.561 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:09.561 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:09.561 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:09.561 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.561 1+0 records in 00:08:09.561 1+0 records out 00:08:09.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00087473 s, 4.7 MB/s 00:08:09.561 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.561 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:09.561 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.561 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:09.561 11:51:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:09.561 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:09.561 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:09.561 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:09.561 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.561 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:09.820 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:09.820 { 00:08:09.820 "nbd_device": "/dev/nbd0", 00:08:09.820 "bdev_name": "Nvme0n1" 00:08:09.820 }, 00:08:09.820 { 00:08:09.820 "nbd_device": "/dev/nbd1", 00:08:09.820 "bdev_name": "Nvme1n1" 00:08:09.820 }, 00:08:09.820 { 00:08:09.820 "nbd_device": "/dev/nbd10", 00:08:09.820 "bdev_name": "Nvme2n1" 00:08:09.820 }, 00:08:09.820 { 00:08:09.820 "nbd_device": "/dev/nbd11", 00:08:09.820 "bdev_name": "Nvme2n2" 00:08:09.820 }, 00:08:09.820 { 00:08:09.820 "nbd_device": "/dev/nbd12", 00:08:09.820 "bdev_name": "Nvme2n3" 00:08:09.820 }, 00:08:09.820 { 00:08:09.820 "nbd_device": "/dev/nbd13", 00:08:09.820 "bdev_name": "Nvme3n1" 00:08:09.820 } 00:08:09.820 ]' 00:08:09.820 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:09.820 { 00:08:09.820 "nbd_device": "/dev/nbd0", 00:08:09.820 "bdev_name": "Nvme0n1" 00:08:09.820 }, 00:08:09.820 { 00:08:09.820 "nbd_device": "/dev/nbd1", 00:08:09.820 "bdev_name": "Nvme1n1" 00:08:09.820 }, 00:08:09.820 { 00:08:09.820 "nbd_device": "/dev/nbd10", 00:08:09.820 "bdev_name": "Nvme2n1" 
00:08:09.820 }, 00:08:09.820 { 00:08:09.820 "nbd_device": "/dev/nbd11", 00:08:09.820 "bdev_name": "Nvme2n2" 00:08:09.820 }, 00:08:09.820 { 00:08:09.820 "nbd_device": "/dev/nbd12", 00:08:09.820 "bdev_name": "Nvme2n3" 00:08:09.820 }, 00:08:09.820 { 00:08:09.820 "nbd_device": "/dev/nbd13", 00:08:09.820 "bdev_name": "Nvme3n1" 00:08:09.820 } 00:08:09.820 ]' 00:08:09.820 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:09.820 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:09.820 /dev/nbd1 00:08:09.820 /dev/nbd10 00:08:09.820 /dev/nbd11 00:08:09.820 /dev/nbd12 00:08:09.820 /dev/nbd13' 00:08:09.820 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:09.820 /dev/nbd1 00:08:09.820 /dev/nbd10 00:08:09.820 /dev/nbd11 00:08:09.820 /dev/nbd12 00:08:09.820 /dev/nbd13' 00:08:09.820 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:09.820 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:09.820 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:09.820 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:09.820 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:09.820 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:09.820 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:09.820 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:09.820 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:09.820 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:09.820 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:09.820 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:09.820 256+0 records in 00:08:09.820 256+0 records out 00:08:09.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126007 s, 83.2 MB/s 00:08:09.821 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.821 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:09.821 256+0 records in 00:08:09.821 256+0 records out 00:08:09.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125679 s, 8.3 MB/s 00:08:09.821 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.821 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:10.079 256+0 records in 00:08:10.079 256+0 records out 00:08:10.079 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127594 s, 8.2 MB/s 00:08:10.079 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.079 11:51:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:10.079 256+0 records in 00:08:10.079 256+0 records out 00:08:10.079 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128998 s, 8.1 MB/s 00:08:10.079 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.079 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:10.338 256+0 records in 00:08:10.338 256+0 records out 00:08:10.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12922 s, 8.1 MB/s 00:08:10.338 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.338 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:10.597 256+0 records in 00:08:10.597 256+0 records out 00:08:10.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134646 s, 7.8 MB/s 00:08:10.597 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.597 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:10.597 256+0 records in 00:08:10.597 256+0 records out 00:08:10.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131504 s, 8.0 MB/s 00:08:10.597 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:10.597 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:10.597 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:10.597 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:10.597 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:10.597 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:10.597 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:10.597 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.597 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:10.597 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.598 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:10.598 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.598 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:10.598 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.598 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:10.598 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.598 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:10.598 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.598 11:52:00 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:10.598 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:10.598 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:10.598 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.598 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:10.598 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:10.598 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:10.598 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.598 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:10.857 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:10.857 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:10.857 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:10.857 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.857 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.857 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:10.857 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.857 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.857 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.857 11:52:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:11.116 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:11.116 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:11.116 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:11.116 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.116 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.116 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:11.116 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.116 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.116 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.116 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:11.376 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:11.376 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:11.376 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:11.376 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.376 11:52:01 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.376 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:11.376 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.376 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.376 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.376 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:11.634 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:11.634 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:11.634 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:11.634 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.634 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.634 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:11.634 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.634 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.634 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.634 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:11.893 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:11.893 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:11.893 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:11.893 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.893 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.893 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:11.893 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.893 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.893 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.893 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:11.893 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:11.893 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:11.893 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:11.893 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.893 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.893 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:11.893 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.893 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.893 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:11.893 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.893 11:52:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:12.152 11:52:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:12.152 11:52:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:12.152 11:52:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:12.152 11:52:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:12.152 11:52:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:12.152 11:52:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:12.152 11:52:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:12.152 11:52:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:12.152 11:52:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:12.152 11:52:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:12.152 11:52:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:12.152 11:52:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:12.152 11:52:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:12.152 11:52:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.152 11:52:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:12.152 11:52:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:12.411 malloc_lvol_verify 00:08:12.411 11:52:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:12.670 dd4793ff-65d5-4fcb-8fc1-7eb168168cee 00:08:12.670 11:52:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:12.930 4a75300a-d9b8-4627-9a2e-f3b6c729d7bc 00:08:12.930 11:52:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:13.189 /dev/nbd0 00:08:13.189 11:52:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:13.189 11:52:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:13.189 11:52:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:13.189 11:52:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:13.189 11:52:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:13.189 mke2fs 1.47.0 (5-Feb-2023) 00:08:13.189 Discarding device blocks: 0/4096 done 00:08:13.189 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:13.189 00:08:13.189 Allocating group tables: 0/1 done 00:08:13.189 Writing inode tables: 0/1 done 00:08:13.189 Creating journal (1024 blocks): done 00:08:13.189 Writing superblocks and filesystem accounting information: 0/1 done 00:08:13.189 00:08:13.189 11:52:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:13.189 11:52:03 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.189 11:52:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:13.189 11:52:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:13.189 11:52:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:13.189 11:52:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.189 11:52:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:13.448 11:52:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:13.448 11:52:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:13.448 11:52:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:13.448 11:52:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.448 11:52:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.448 11:52:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:13.448 11:52:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.448 11:52:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.448 11:52:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61055 00:08:13.448 11:52:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61055 ']' 00:08:13.448 11:52:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61055 00:08:13.448 11:52:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:08:13.448 11:52:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.448 11:52:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61055 00:08:13.448 11:52:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.448 11:52:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.448 killing process with pid 61055 00:08:13.448 11:52:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61055' 00:08:13.448 11:52:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61055 00:08:13.448 11:52:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61055 00:08:14.823 11:52:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:14.823 00:08:14.823 real 0m11.019s 00:08:14.823 user 0m14.336s 00:08:14.823 sys 0m4.424s 00:08:14.823 11:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.823 ************************************ 00:08:14.823 END TEST bdev_nbd 00:08:14.823 11:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:14.823 ************************************ 00:08:14.823 11:52:04 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:08:14.823 11:52:04 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:08:14.823 skipping fio tests on NVMe due to multi-ns failures. 00:08:14.823 11:52:04 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
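The heart of the bdev_nbd test that just ended is the nbd_dd_data_verify pass: the same 1 MiB random pattern is pushed through every NBD device with O_DIRECT and then compared back byte-for-byte, showing the data round-tripped through the SPDK nbd server. A sketch reconstructed from the trace (real code: test/bdev/nbd_common.sh@70-85; the temp-file path here is shortened):

    # Reconstruction of nbd_dd_data_verify; not the verbatim source.
    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2 i
        local tmp_file=/tmp/nbdrandtest   # this CI run uses test/bdev/nbdrandtest

        if [ "$operation" = write ]; then
            # 1 MiB of random data, written to each device bypassing the page cache.
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            # Byte-for-byte compare of the first 1 MiB on every device.
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }

It is called once with operation=write after the disks are started and once with operation=verify before they are stopped, exactly as the two nbd_dd_data_verify invocations in the trace show.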
00:08:14.823 11:52:04 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:14.823 11:52:04 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:14.823 11:52:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:14.823 11:52:04 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.823 11:52:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.823 ************************************ 00:08:14.823 START TEST bdev_verify 00:08:14.823 ************************************ 00:08:14.823 11:52:04 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:14.823 [2024-11-27 11:52:04.694798] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:14.823 [2024-11-27 11:52:04.694918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61439 ] 00:08:15.081 [2024-11-27 11:52:04.876677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:15.081 [2024-11-27 11:52:04.994963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.081 [2024-11-27 11:52:04.994988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.016 Running I/O for 5 seconds... 00:08:17.887 19584.00 IOPS, 76.50 MiB/s [2024-11-27T11:52:09.409Z] 21504.00 IOPS, 84.00 MiB/s [2024-11-27T11:52:09.990Z] 21589.33 IOPS, 84.33 MiB/s [2024-11-27T11:52:10.928Z] 21072.00 IOPS, 82.31 MiB/s [2024-11-27T11:52:10.928Z] 21068.80 IOPS, 82.30 MiB/s 00:08:20.875 Latency(us) 00:08:20.875 [2024-11-27T11:52:10.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.875 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:20.875 Verification LBA range: start 0x0 length 0xbd0bd 00:08:20.875 Nvme0n1 : 5.06 1721.85 6.73 0.00 0.00 74191.70 11475.38 78748.48 00:08:20.875 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:20.875 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:20.875 Nvme0n1 : 5.06 1747.13 6.82 0.00 0.00 73104.01 14528.46 77485.13 00:08:20.875 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:20.875 Verification LBA range: start 0x0 length 0xa0000 00:08:20.875 Nvme1n1 : 5.06 1720.88 6.72 0.00 0.00 74098.85 13265.12 76221.79 00:08:20.875 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:20.875 Verification LBA range: start 0xa0000 length 0xa0000 00:08:20.875 Nvme1n1 : 5.06 1746.15 6.82 0.00 0.00 73011.85 17476.27 70326.18 00:08:20.875 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:20.875 Verification LBA range: start 0x0 length 0x80000 00:08:20.875 Nvme2n1 : 5.06 1719.92 6.72 0.00 0.00 73899.16 14317.91 74958.44 00:08:20.875 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:20.875 Verification LBA range: start 0x80000 length 0x80000 00:08:20.875 Nvme2n1 : 5.06 1745.18 6.82 0.00 0.00 72811.71 18423.78 66536.15 00:08:20.875 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:20.875 Verification LBA range: start 0x0 length 0x80000 00:08:20.875 Nvme2n2 : 5.06 1718.98 6.71 0.00 0.00 73803.98 16107.64 75800.67 00:08:20.875 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:20.875 Verification LBA range: start 0x80000 length 0x80000 00:08:20.875 Nvme2n2 : 5.06 1744.23 6.81 0.00 0.00 72672.61 18107.94 65272.80 00:08:20.875 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:20.875 Verification LBA range: start 0x0 length 0x80000 00:08:20.875 Nvme2n3 : 5.07 1718.06 6.71 0.00 0.00 73711.57 16844.59 77485.13 00:08:20.875 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:20.875 Verification LBA range: start 0x80000 length 0x80000 00:08:20.875 Nvme2n3 : 5.07 1743.29 6.81 0.00 0.00 72571.76 17055.15 69062.84 00:08:20.875 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:20.875 Verification LBA range: start 0x0 length 0x20000 00:08:20.875 Nvme3n1 : 5.07 1717.20 6.71 0.00 0.00 73605.70 15897.09 79169.59 00:08:20.875 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:20.875 Verification LBA range: start 0x20000 length 0x20000 00:08:20.875 Nvme3n1 : 5.10 1756.51 6.86 0.00 0.00 72000.96 5606.09 73273.99 00:08:20.875 [2024-11-27T11:52:10.928Z] =================================================================================================================== 00:08:20.875 [2024-11-27T11:52:10.928Z] Total : 20799.37 81.25 0.00 0.00 73284.42 5606.09 79169.59 00:08:22.254 00:08:22.254 real 0m7.622s 00:08:22.254 user 0m14.076s 00:08:22.254 sys 0m0.317s 00:08:22.254 11:52:12 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.254 11:52:12 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:22.254 ************************************ 00:08:22.254 END TEST bdev_verify 00:08:22.254 ************************************ 00:08:22.254 11:52:12 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:22.255 11:52:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:22.255 11:52:12 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.255 11:52:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.255 ************************************ 00:08:22.255 START TEST bdev_verify_big_io 00:08:22.255 ************************************ 00:08:22.255 11:52:12 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:22.513 [2024-11-27 11:52:12.393926] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
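bdev_verify drives all six NVMe bdevs through SPDK's bdevperf example; the full command line is visible in the trace, unpacked here flag by flag (glosses follow bdevperf's usage text; confirm with build/examples/bdevperf --help on your checkout):

    # The bdev_verify invocation from the trace, with the flags unpacked.
    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    args=(
        --json "$SPDK_REPO/test/bdev/bdev.json"   # bdev configuration to load
        -q 128        # queue depth per job
        -o 4096       # I/O size in bytes
        -w verify     # write, read back, and compare payloads
        -t 5          # run time in seconds
        -C            # every core submits I/O to each bdev
        -m 0x3        # core mask: reactors on cores 0 and 1
    )
    "$SPDK_REPO/build/examples/bdevperf" "${args[@]}" ''   # trailing '' passed as in the trace

The totals line is self-consistent: with 4096-byte I/Os, MiB/s is IOPS/256, and 20799.37 / 256 = 81.25 MiB/s, matching the table. The big-I/O run starting here is the same command with -o 65536, and bdev_write_zeroes later swaps in -w write_zeroes -t 1 on a single core.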
00:08:22.513 [2024-11-27 11:52:12.394075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61537 ] 00:08:22.771 [2024-11-27 11:52:12.583247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:22.771 [2024-11-27 11:52:12.697693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.771 [2024-11-27 11:52:12.697718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.707 Running I/O for 5 seconds... 00:08:28.783 2137.00 IOPS, 133.56 MiB/s [2024-11-27T11:52:19.404Z] 3253.00 IOPS, 203.31 MiB/s [2024-11-27T11:52:19.404Z] 3918.67 IOPS, 244.92 MiB/s 00:08:29.351 Latency(us) 00:08:29.351 [2024-11-27T11:52:19.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.351 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.351 Verification LBA range: start 0x0 length 0xbd0b 00:08:29.351 Nvme0n1 : 5.32 192.33 12.02 0.00 0.00 648691.97 23477.15 764744.58 00:08:29.351 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.351 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:29.351 Nvme0n1 : 5.70 154.95 9.68 0.00 0.00 811241.11 15581.25 798433.77 00:08:29.351 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.351 Verification LBA range: start 0x0 length 0xa000 00:08:29.351 Nvme1n1 : 5.43 192.17 12.01 0.00 0.00 628596.90 69062.84 650201.34 00:08:29.351 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.351 Verification LBA range: start 0xa000 length 0xa000 00:08:29.351 Nvme1n1 : 5.70 153.34 9.58 0.00 0.00 798853.68 39584.80 902870.26 00:08:29.351 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.351 Verification LBA range: start 0x0 length 0x8000 00:08:29.351 Nvme2n1 : 5.52 202.79 12.67 0.00 0.00 589716.09 40848.14 663677.02 00:08:29.351 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.351 Verification LBA range: start 0x8000 length 0x8000 00:08:29.351 Nvme2n1 : 5.70 153.13 9.57 0.00 0.00 778357.86 42532.60 909608.10 00:08:29.351 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.351 Verification LBA range: start 0x0 length 0x8000 00:08:29.351 Nvme2n2 : 5.58 206.49 12.91 0.00 0.00 565468.70 53060.47 683890.53 00:08:29.351 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.351 Verification LBA range: start 0x8000 length 0x8000 00:08:29.351 Nvme2n2 : 5.70 150.45 9.40 0.00 0.00 770901.55 42322.04 929821.61 00:08:29.351 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.351 Verification LBA range: start 0x0 length 0x8000 00:08:29.351 Nvme2n3 : 5.63 215.83 13.49 0.00 0.00 529774.35 20529.35 761375.67 00:08:29.351 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.351 Verification LBA range: start 0x8000 length 0x8000 00:08:29.351 Nvme2n3 : 5.71 148.64 9.29 0.00 0.00 766317.50 17686.82 1354305.39 00:08:29.351 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.351 Verification LBA range: start 0x0 length 0x2000 00:08:29.351 Nvme3n1 : 5.69 235.98 14.75 0.00 0.00 474494.46 1401.52 768113.50 00:08:29.351 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, 
IO size: 65536) 00:08:29.351 Verification LBA range: start 0x2000 length 0x2000 00:08:29.351 Nvme3n1 : 5.72 160.27 10.02 0.00 0.00 694809.04 4948.10 1374518.90 00:08:29.351 [2024-11-27T11:52:19.404Z] =================================================================================================================== 00:08:29.351 [2024-11-27T11:52:19.404Z] Total : 2166.38 135.40 0.00 0.00 654749.58 1401.52 1374518.90 00:08:31.256 ************************************ 00:08:31.257 END TEST bdev_verify_big_io 00:08:31.257 ************************************ 00:08:31.257 00:08:31.257 real 0m8.835s 00:08:31.257 user 0m16.462s 00:08:31.257 sys 0m0.350s 00:08:31.257 11:52:21 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.257 11:52:21 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:31.257 11:52:21 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:31.257 11:52:21 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:31.257 11:52:21 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.257 11:52:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:31.257 ************************************ 00:08:31.257 START TEST bdev_write_zeroes 00:08:31.257 ************************************ 00:08:31.257 11:52:21 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:31.257 [2024-11-27 11:52:21.302316] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:31.257 [2024-11-27 11:52:21.302612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61657 ] 00:08:31.516 [2024-11-27 11:52:21.484811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.775 [2024-11-27 11:52:21.598817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.343 Running I/O for 1 seconds... 
00:08:33.275 76032.00 IOPS, 297.00 MiB/s 00:08:33.275 Latency(us) 00:08:33.275 [2024-11-27T11:52:23.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.275 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:33.276 Nvme0n1 : 1.02 12623.63 49.31 0.00 0.00 10120.60 8211.74 29688.60 00:08:33.276 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:33.276 Nvme1n1 : 1.02 12612.12 49.27 0.00 0.00 10119.19 8474.94 29688.60 00:08:33.276 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:33.276 Nvme2n1 : 1.02 12600.98 49.22 0.00 0.00 10092.53 8264.38 27161.91 00:08:33.276 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:33.276 Nvme2n2 : 1.02 12590.53 49.18 0.00 0.00 10060.08 8159.10 26846.07 00:08:33.276 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:33.276 Nvme2n3 : 1.02 12580.23 49.14 0.00 0.00 10029.60 8211.74 24529.94 00:08:33.276 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:33.276 Nvme3n1 : 1.02 12569.85 49.10 0.00 0.00 10005.59 6843.12 22003.25 00:08:33.276 [2024-11-27T11:52:23.329Z] =================================================================================================================== 00:08:33.276 [2024-11-27T11:52:23.329Z] Total : 75577.35 295.22 0.00 0.00 10071.26 6843.12 29688.60 00:08:34.654 00:08:34.654 real 0m3.230s 00:08:34.654 user 0m2.867s 00:08:34.654 sys 0m0.245s 00:08:34.654 11:52:24 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.654 ************************************ 00:08:34.654 END TEST bdev_write_zeroes 00:08:34.654 ************************************ 00:08:34.654 11:52:24 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:34.654 11:52:24 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:34.654 11:52:24 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:34.654 11:52:24 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.654 11:52:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:34.654 ************************************ 00:08:34.654 START TEST bdev_json_nonenclosed 00:08:34.654 ************************************ 00:08:34.654 11:52:24 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:34.654 [2024-11-27 11:52:24.600022] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
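Every test in this log runs through the run_test wrapper, which produces the starred START TEST / END TEST banners and the real/user/sys timing lines seen above. A simplified model (the actual function in test/common/autotest_common.sh also manages xtrace state and failure bookkeeping):

    # Simplified model of run_test; the real wrapper does more bookkeeping.
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"          # emits the real/user/sys lines on completion
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }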
00:08:34.654 [2024-11-27 11:52:24.600128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61710 ] 00:08:34.914 [2024-11-27 11:52:24.780561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.914 [2024-11-27 11:52:24.891527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.914 [2024-11-27 11:52:24.891625] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:34.914 [2024-11-27 11:52:24.891648] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:34.914 [2024-11-27 11:52:24.891660] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.173 00:08:35.173 real 0m0.631s 00:08:35.173 user 0m0.380s 00:08:35.173 sys 0m0.146s 00:08:35.173 ************************************ 00:08:35.173 END TEST bdev_json_nonenclosed 00:08:35.173 ************************************ 00:08:35.173 11:52:25 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.173 11:52:25 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:35.173 11:52:25 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:35.173 11:52:25 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:35.173 11:52:25 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.173 11:52:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:35.173 ************************************ 00:08:35.173 START TEST bdev_json_nonarray 00:08:35.173 ************************************ 00:08:35.173 11:52:25 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:35.432 [2024-11-27 11:52:25.306304] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:35.432 [2024-11-27 11:52:25.306604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61736 ] 00:08:35.692 [2024-11-27 11:52:25.483926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.692 [2024-11-27 11:52:25.595125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.692 [2024-11-27 11:52:25.595236] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
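The two json_config errors above are the point of these tests: bdev_json_nonenclosed and bdev_json_nonarray feed bdevperf configs that parse as JSON but are structurally wrong, and each test passes only when the app refuses to start (the "spdk_app_stop'd on non-zero" warning). Minimal inputs that would trip the same checks, as an illustration only; the repo's actual nonenclosed.json and nonarray.json fixtures may differ:

    # Hypothetical minimal fixtures; the repo's real files may differ.
    # Trips "Invalid JSON configuration: not enclosed in {}."
    printf '[ "top level is an array, not an object" ]\n' > nonenclosed.json
    # Trips "Invalid JSON configuration: 'subsystems' should be an array."
    printf '{ "subsystems": "not-an-array" }\n' > nonarray.json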
00:08:35.692 [2024-11-27 11:52:25.595258] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:35.692 [2024-11-27 11:52:25.595270] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.951 00:08:35.951 real 0m0.625s 00:08:35.951 user 0m0.377s 00:08:35.951 sys 0m0.143s 00:08:35.951 11:52:25 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.951 ************************************ 00:08:35.951 END TEST bdev_json_nonarray 00:08:35.951 ************************************ 00:08:35.951 11:52:25 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:35.951 11:52:25 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:08:35.951 11:52:25 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:08:35.951 11:52:25 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:08:35.951 11:52:25 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:08:35.951 11:52:25 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:08:35.951 11:52:25 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:35.951 11:52:25 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:35.951 11:52:25 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:35.951 11:52:25 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:35.951 11:52:25 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:35.951 11:52:25 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:35.951 00:08:35.951 real 0m42.193s 00:08:35.951 user 1m2.305s 00:08:35.951 sys 0m7.500s 00:08:35.951 11:52:25 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.951 ************************************ 00:08:35.951 END TEST blockdev_nvme 00:08:35.951 ************************************ 00:08:35.951 11:52:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:35.951 11:52:25 -- spdk/autotest.sh@209 -- # uname -s 00:08:35.951 11:52:25 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:08:35.951 11:52:25 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:35.951 11:52:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:35.951 11:52:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.951 11:52:25 -- common/autotest_common.sh@10 -- # set +x 00:08:36.211 ************************************ 00:08:36.211 START TEST blockdev_nvme_gpt 00:08:36.211 ************************************ 00:08:36.211 11:52:26 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:36.211 * Looking for test storage... 
00:08:36.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:36.211 11:52:26 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:36.211 11:52:26 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:08:36.211 11:52:26 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:36.211 11:52:26 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.211 11:52:26 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:08:36.211 11:52:26 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.211 11:52:26 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:36.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.211 --rc genhtml_branch_coverage=1 00:08:36.211 --rc genhtml_function_coverage=1 00:08:36.211 --rc genhtml_legend=1 00:08:36.211 --rc geninfo_all_blocks=1 00:08:36.211 --rc geninfo_unexecuted_blocks=1 00:08:36.211 00:08:36.211 ' 00:08:36.211 11:52:26 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:36.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.211 --rc 
genhtml_branch_coverage=1 00:08:36.211 --rc genhtml_function_coverage=1 00:08:36.211 --rc genhtml_legend=1 00:08:36.211 --rc geninfo_all_blocks=1 00:08:36.211 --rc geninfo_unexecuted_blocks=1 00:08:36.211 00:08:36.211 ' 00:08:36.211 11:52:26 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:36.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.211 --rc genhtml_branch_coverage=1 00:08:36.211 --rc genhtml_function_coverage=1 00:08:36.211 --rc genhtml_legend=1 00:08:36.211 --rc geninfo_all_blocks=1 00:08:36.211 --rc geninfo_unexecuted_blocks=1 00:08:36.211 00:08:36.211 ' 00:08:36.211 11:52:26 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:36.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.211 --rc genhtml_branch_coverage=1 00:08:36.211 --rc genhtml_function_coverage=1 00:08:36.211 --rc genhtml_legend=1 00:08:36.211 --rc geninfo_all_blocks=1 00:08:36.211 --rc geninfo_unexecuted_blocks=1 00:08:36.211 00:08:36.211 ' 00:08:36.211 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:36.211 11:52:26 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:36.211 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:36.211 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:36.211 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61820 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; 
exit 1' SIGINT SIGTERM EXIT 00:08:36.212 11:52:26 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61820 00:08:36.212 11:52:26 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 61820 ']' 00:08:36.212 11:52:26 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.212 11:52:26 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.212 11:52:26 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.212 11:52:26 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.212 11:52:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:36.471 [2024-11-27 11:52:26.360340] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:36.471 [2024-11-27 11:52:26.361122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61820 ] 00:08:36.731 [2024-11-27 11:52:26.539550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.731 [2024-11-27 11:52:26.646237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.689 11:52:27 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.689 11:52:27 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:08:37.689 11:52:27 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:08:37.689 11:52:27 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:08:37.689 11:52:27 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:37.948 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:38.208 Waiting for block devices as requested 00:08:38.466 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:38.466 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:38.466 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:38.726 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:43.999 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:43.999 11:52:33 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:43.999 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:43.999 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:43.999 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:44.000 11:52:33 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:44.000 11:52:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:44.000 11:52:33 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:08:44.000 11:52:33 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:08:44.000 11:52:33 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:08:44.000 11:52:33 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:44.000 11:52:33 
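Note: the is_block_zoned checks above reduce to reading each namespace's sysfs "zoned" attribute; none of the namespaces in this run is zoned, so all six stay eligible for GPT setup. Condensed into a standalone sketch:

    # list zoned NVMe namespaces; non-zoned devices report "none"
    for dev in /sys/block/nvme*; do
      [[ -e $dev/queue/zoned ]] || continue
      [[ $(<"$dev/queue/zoned") == none ]] || echo "${dev##*/} is zoned"
    done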
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:08:44.000 11:52:33 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:08:44.000 11:52:33 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:08:44.000 11:52:33 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:08:44.000 BYT; 00:08:44.000 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:44.000 11:52:33 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:08:44.000 BYT; 00:08:44.000 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:44.000 11:52:33 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:08:44.000 11:52:33 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:08:44.000 11:52:33 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:08:44.000 11:52:33 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:44.000 11:52:33 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:44.000 11:52:33 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:44.000 11:52:33 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:08:44.000 11:52:33 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:08:44.000 11:52:33 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:44.000 11:52:33 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:44.000 11:52:33 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:08:44.000 11:52:33 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:08:44.000 11:52:33 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:44.000 11:52:33 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:44.000 11:52:33 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:44.000 11:52:33 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:44.000 11:52:33 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:44.000 11:52:33 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:08:44.000 11:52:33 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:08:44.000 11:52:33 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:44.000 11:52:33 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:44.000 11:52:33 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:08:44.000 11:52:33 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:08:44.000 11:52:33 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:44.000 11:52:33 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:44.000 11:52:33 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:44.000 11:52:33 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:44.000 11:52:33 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:44.000 11:52:33 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:08:44.936 The operation has completed successfully. 00:08:44.936 11:52:34 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:08:46.316 The operation has completed successfully. 00:08:46.316 11:52:35 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:46.576 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:47.514 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.514 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.514 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.514 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.514 11:52:37 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:08:47.514 11:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.514 11:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:47.514 [] 00:08:47.514 11:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.514 11:52:37 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:08:47.514 11:52:37 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:47.514 11:52:37 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:47.514 11:52:37 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:47.774 11:52:37 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:47.774 11:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.774 11:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.033 11:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.033 11:52:37 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:08:48.033 11:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.033 11:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.033 11:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.033 11:52:37 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:08:48.033 11:52:37 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:08:48.033 11:52:37 
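Note: the GPT setup above condenses to three commands — label the disk and create two half-size partitions with parted, then retag each partition's type GUID and unique GUID with sgdisk, using the SPDK GPT type GUIDs grep'd out of module/bdev/gpt/gpt.h during this run. A sketch against a hypothetical /dev/nvmeXn1:

    parted -s /dev/nvmeXn1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvmeXn1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvmeXn1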
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.033 11:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.033 11:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.033 11:52:37 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:08:48.033 11:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.033 11:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.033 11:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.033 11:52:37 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:48.033 11:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.033 11:52:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.033 11:52:38 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.033 11:52:38 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:08:48.033 11:52:38 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:08:48.033 11:52:38 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:08:48.033 11:52:38 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.033 11:52:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.293 11:52:38 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.293 11:52:38 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:08:48.293 11:52:38 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:08:48.294 11:52:38 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "849e550f-e3a7-44c5-bec2-59f4bd218445"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "849e550f-e3a7-44c5-bec2-59f4bd218445",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "83b15e81-9de0-42c1-bfef-5e1097b3a820"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "83b15e81-9de0-42c1-bfef-5e1097b3a820",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "4c8837d0-8d3f-4425-ba8a-1622857e3a4c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4c8837d0-8d3f-4425-ba8a-1622857e3a4c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "6bedc533-bb41-48cc-a68b-e5faef22b902"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6bedc533-bb41-48cc-a68b-e5faef22b902",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "280ed79f-9845-4533-af35-58601b8722a3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "280ed79f-9845-4533-af35-58601b8722a3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:48.294 11:52:38 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:08:48.294 11:52:38 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:08:48.294 11:52:38 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:08:48.294 11:52:38 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 61820 00:08:48.294 11:52:38 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 61820 ']' 00:08:48.294 11:52:38 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 61820 00:08:48.294 11:52:38 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:08:48.294 11:52:38 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.294 11:52:38 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61820 00:08:48.294 11:52:38 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:48.294 killing process with pid 61820 00:08:48.294 11:52:38 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:48.294 11:52:38 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61820' 00:08:48.294 11:52:38 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 61820 00:08:48.294 11:52:38 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 61820 00:08:50.862 11:52:40 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:50.862 11:52:40 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:50.862 11:52:40 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:50.862 11:52:40 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.862 11:52:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:50.862 ************************************ 00:08:50.862 START TEST bdev_hello_world 00:08:50.862 ************************************ 00:08:50.862 11:52:40 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:50.862 
[2024-11-27 11:52:40.601186] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:50.862 [2024-11-27 11:52:40.601311] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62463 ] 00:08:50.862 [2024-11-27 11:52:40.779423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.862 [2024-11-27 11:52:40.883438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.797 [2024-11-27 11:52:41.532956] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:51.797 [2024-11-27 11:52:41.532998] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:51.797 [2024-11-27 11:52:41.533021] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:51.797 [2024-11-27 11:52:41.536051] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:51.797 [2024-11-27 11:52:41.536723] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:51.797 [2024-11-27 11:52:41.536861] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:51.797 [2024-11-27 11:52:41.537055] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:51.797 00:08:51.797 [2024-11-27 11:52:41.537079] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:52.735 00:08:52.735 real 0m2.103s 00:08:52.735 user 0m1.748s 00:08:52.735 sys 0m0.246s 00:08:52.735 11:52:42 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.735 11:52:42 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:52.735 ************************************ 00:08:52.735 END TEST bdev_hello_world 00:08:52.735 ************************************ 00:08:52.735 11:52:42 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:08:52.735 11:52:42 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:52.735 11:52:42 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.735 11:52:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:52.735 ************************************ 00:08:52.735 START TEST bdev_bounds 00:08:52.735 ************************************ 00:08:52.735 Process bdevio pid: 62511 00:08:52.735 11:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:08:52.735 11:52:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62511 00:08:52.735 11:52:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:52.735 11:52:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:52.735 11:52:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62511' 00:08:52.735 11:52:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62511 00:08:52.735 11:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62511 ']' 00:08:52.735 11:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.735 11:52:42 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.735 11:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.735 11:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.735 11:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:52.994 [2024-11-27 11:52:42.790014] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:52.994 [2024-11-27 11:52:42.790731] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62511 ] 00:08:52.994 [2024-11-27 11:52:42.969499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:53.253 [2024-11-27 11:52:43.076982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.253 [2024-11-27 11:52:43.077149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.253 [2024-11-27 11:52:43.077176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.820 11:52:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.820 11:52:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:08:53.820 11:52:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:53.820 I/O targets: 00:08:53.820 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:53.820 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:08:53.820 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:08:53.820 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:53.820 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:53.820 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:53.820 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:53.820 00:08:53.820 00:08:53.820 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.820 http://cunit.sourceforge.net/ 00:08:53.820 00:08:53.820 00:08:53.820 Suite: bdevio tests on: Nvme3n1 00:08:53.820 Test: blockdev write read block ...passed 00:08:53.820 Test: blockdev write zeroes read block ...passed 00:08:53.820 Test: blockdev write zeroes read no split ...passed 00:08:54.080 Test: blockdev write zeroes read split ...passed 00:08:54.080 Test: blockdev write zeroes read split partial ...passed 00:08:54.080 Test: blockdev reset ...[2024-11-27 11:52:43.914168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:54.080 [2024-11-27 11:52:43.918635] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:08:54.080 passed 00:08:54.080 Test: blockdev write read 8 blocks ...
00:08:54.080 passed 00:08:54.080 Test: blockdev write read size > 128k ...passed 00:08:54.080 Test: blockdev write read invalid size ...passed 00:08:54.080 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:54.080 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:54.080 Test: blockdev write read max offset ...passed 00:08:54.080 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:54.080 Test: blockdev writev readv 8 blocks ...passed 00:08:54.080 Test: blockdev writev readv 30 x 1block ...passed 00:08:54.080 Test: blockdev writev readv block ...passed 00:08:54.080 Test: blockdev writev readv size > 128k ...passed 00:08:54.080 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:54.080 Test: blockdev comparev and writev ...[2024-11-27 11:52:43.929443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b3e04000 len:0x1000 00:08:54.080 [2024-11-27 11:52:43.929498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:54.080 passed 00:08:54.080 Test: blockdev nvme passthru rw ...passed 00:08:54.080 Test: blockdev nvme passthru vendor specific ...passed 00:08:54.080 Test: blockdev nvme admin passthru ...[2024-11-27 11:52:43.930409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:54.080 [2024-11-27 11:52:43.930462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:54.080 passed 00:08:54.080 Test: blockdev copy ...passed 00:08:54.080 Suite: bdevio tests on: Nvme2n3 00:08:54.080 Test: blockdev write read block ...passed 00:08:54.080 Test: blockdev write zeroes read block ...passed 00:08:54.080 Test: blockdev write zeroes read no split ...passed 00:08:54.080 Test: blockdev write zeroes read split ...passed 00:08:54.080 Test: blockdev write zeroes read split partial ...passed 00:08:54.080 Test: blockdev reset ...[2024-11-27 11:52:44.005427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:54.080 passed 00:08:54.080 Test: blockdev write read 8 blocks ...[2024-11-27 11:52:44.010672] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:54.080 passed 00:08:54.080 Test: blockdev write read size > 128k ...passed 00:08:54.080 Test: blockdev write read invalid size ...passed 00:08:54.080 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:54.080 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:54.080 Test: blockdev write read max offset ...passed 00:08:54.080 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:54.080 Test: blockdev writev readv 8 blocks ...passed 00:08:54.080 Test: blockdev writev readv 30 x 1block ...passed 00:08:54.080 Test: blockdev writev readv block ...passed 00:08:54.080 Test: blockdev writev readv size > 128k ...passed 00:08:54.080 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:54.080 Test: blockdev comparev and writev ...[2024-11-27 11:52:44.019191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b3e02000 len:0x1000 00:08:54.080 [2024-11-27 11:52:44.019237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:54.080 passed 00:08:54.080 Test: blockdev nvme passthru rw ...passed 00:08:54.080 Test: blockdev nvme passthru vendor specific ...passed 00:08:54.080 Test: blockdev nvme admin passthru ...[2024-11-27 11:52:44.020175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:54.080 [2024-11-27 11:52:44.020217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:54.080 passed 00:08:54.080 Test: blockdev copy ...passed 00:08:54.080 Suite: bdevio tests on: Nvme2n2 00:08:54.080 Test: blockdev write read block ...passed 00:08:54.080 Test: blockdev write zeroes read block ...passed 00:08:54.080 Test: blockdev write zeroes read no split ...passed 00:08:54.080 Test: blockdev write zeroes read split ...passed 00:08:54.080 Test: blockdev write zeroes read split partial ...passed 00:08:54.080 Test: blockdev reset ...[2024-11-27 11:52:44.096345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:54.080 [2024-11-27 11:52:44.101042] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:54.080 passed 00:08:54.080 Test: blockdev write read 8 blocks ...passed 00:08:54.080 Test: blockdev write read size > 128k ...passed 00:08:54.080 Test: blockdev write read invalid size ...passed 00:08:54.080 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:54.080 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:54.080 Test: blockdev write read max offset ...passed 00:08:54.080 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:54.080 Test: blockdev writev readv 8 blocks ...passed 00:08:54.080 Test: blockdev writev readv 30 x 1block ...passed 00:08:54.080 Test: blockdev writev readv block ...passed 00:08:54.080 Test: blockdev writev readv size > 128k ...passed 00:08:54.080 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:54.080 Test: blockdev comparev and writev ...[2024-11-27 11:52:44.111254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7c38000 len:0x1000 00:08:54.080 [2024-11-27 11:52:44.111491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:54.080 passed 00:08:54.080 Test: blockdev nvme passthru rw ...passed 00:08:54.080 Test: blockdev nvme passthru vendor specific ...[2024-11-27 11:52:44.113084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:54.081 [2024-11-27 11:52:44.113266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:54.081 passed 00:08:54.081 Test: blockdev nvme admin passthru ...passed 00:08:54.081 Test: blockdev copy ...passed 00:08:54.081 Suite: bdevio tests on: Nvme2n1 00:08:54.081 Test: blockdev write read block ...passed 00:08:54.081 Test: blockdev write zeroes read block ...passed 00:08:54.081 Test: blockdev write zeroes read no split ...passed 00:08:54.341 Test: blockdev write zeroes read split ...passed 00:08:54.341 Test: blockdev write zeroes read split partial ...passed 00:08:54.341 Test: blockdev reset ...[2024-11-27 11:52:44.187015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:54.341 passed 00:08:54.341 Test: blockdev write read 8 blocks ...[2024-11-27 11:52:44.191703] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:54.341 passed 00:08:54.341 Test: blockdev write read size > 128k ...passed 00:08:54.341 Test: blockdev write read invalid size ...passed 00:08:54.341 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:54.341 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:54.341 Test: blockdev write read max offset ...passed 00:08:54.341 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:54.341 Test: blockdev writev readv 8 blocks ...passed 00:08:54.341 Test: blockdev writev readv 30 x 1block ...passed 00:08:54.341 Test: blockdev writev readv block ...passed 00:08:54.341 Test: blockdev writev readv size > 128k ...passed 00:08:54.341 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:54.341 Test: blockdev comparev and writev ...[2024-11-27 11:52:44.200779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7c34000 len:0x1000 00:08:54.341 [2024-11-27 11:52:44.200836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:54.341 passed 00:08:54.341 Test: blockdev nvme passthru rw ...passed 00:08:54.341 Test: blockdev nvme passthru vendor specific ...[2024-11-27 11:52:44.201605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:54.341 passed 00:08:54.341 Test: blockdev nvme admin passthru ...[2024-11-27 11:52:44.201639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:54.341 passed 00:08:54.341 Test: blockdev copy ...passed 00:08:54.341 Suite: bdevio tests on: Nvme1n1p2 00:08:54.341 Test: blockdev write read block ...passed 00:08:54.341 Test: blockdev write zeroes read block ...passed 00:08:54.341 Test: blockdev write zeroes read no split ...passed 00:08:54.341 Test: blockdev write zeroes read split ...passed 00:08:54.341 Test: blockdev write zeroes read split partial ...passed 00:08:54.341 Test: blockdev reset ...[2024-11-27 11:52:44.294027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:54.341 [2024-11-27 11:52:44.298677] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:08:54.341 passed 00:08:54.341 Test: blockdev write read 8 blocks ...
00:08:54.341 passed 00:08:54.341 Test: blockdev write read size > 128k ...passed 00:08:54.341 Test: blockdev write read invalid size ...passed 00:08:54.341 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:54.341 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:54.341 Test: blockdev write read max offset ...passed 00:08:54.341 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:54.341 Test: blockdev writev readv 8 blocks ...passed 00:08:54.341 Test: blockdev writev readv 30 x 1block ...passed 00:08:54.341 Test: blockdev writev readv block ...passed 00:08:54.341 Test: blockdev writev readv size > 128k ...passed 00:08:54.341 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:54.341 Test: blockdev comparev and writev ...[2024-11-27 11:52:44.309387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c7c30000 len:0x1000 00:08:54.341 [2024-11-27 11:52:44.309575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:54.341 passed 00:08:54.341 Test: blockdev nvme passthru rw ...passed 00:08:54.341 Test: blockdev nvme passthru vendor specific ...passed 00:08:54.341 Test: blockdev nvme admin passthru ...passed 00:08:54.341 Test: blockdev copy ...passed 00:08:54.341 Suite: bdevio tests on: Nvme1n1p1 00:08:54.341 Test: blockdev write read block ...passed 00:08:54.341 Test: blockdev write zeroes read block ...passed 00:08:54.341 Test: blockdev write zeroes read no split ...passed 00:08:54.341 Test: blockdev write zeroes read split ...passed 00:08:54.341 Test: blockdev write zeroes read split partial ...passed 00:08:54.341 Test: blockdev reset ...[2024-11-27 11:52:44.380267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:54.341 [2024-11-27 11:52:44.384628] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:54.341 passed 00:08:54.341 Test: blockdev write read 8 blocks ...passed 00:08:54.341 Test: blockdev write read size > 128k ...passed 00:08:54.341 Test: blockdev write read invalid size ...passed 00:08:54.341 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:54.341 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:54.341 Test: blockdev write read max offset ...passed 00:08:54.341 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:54.341 Test: blockdev writev readv 8 blocks ...passed 00:08:54.341 Test: blockdev writev readv 30 x 1block ...passed 00:08:54.341 Test: blockdev writev readv block ...passed 00:08:54.601 Test: blockdev writev readv size > 128k ...passed 00:08:54.601 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:54.602 Test: blockdev comparev and writev ...[2024-11-27 11:52:44.394208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b400e000 len:0x1000 00:08:54.602 [2024-11-27 11:52:44.394254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:54.602 passed 00:08:54.602 Test: blockdev nvme passthru rw ...passed 00:08:54.602 Test: blockdev nvme passthru vendor specific ...passed 00:08:54.602 Test: blockdev nvme admin passthru ...passed 00:08:54.602 Test: blockdev copy ...passed 00:08:54.602 Suite: bdevio tests on: Nvme0n1 00:08:54.602 Test: blockdev write read block ...passed 00:08:54.602 Test: blockdev write zeroes read block ...passed 00:08:54.602 Test: blockdev write zeroes read no split ...passed 00:08:54.602 Test: blockdev write zeroes read split ...passed 00:08:54.602 Test: blockdev write zeroes read split partial ...passed 00:08:54.602 Test: blockdev reset ...[2024-11-27 11:52:44.461074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:54.602 passed 00:08:54.602 Test: blockdev write read 8 blocks ...[2024-11-27 11:52:44.465319] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:54.602 passed 00:08:54.602 Test: blockdev write read size > 128k ...passed 00:08:54.602 Test: blockdev write read invalid size ...passed 00:08:54.602 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:54.602 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:54.602 Test: blockdev write read max offset ...passed 00:08:54.602 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:54.602 Test: blockdev writev readv 8 blocks ...passed 00:08:54.602 Test: blockdev writev readv 30 x 1block ...passed 00:08:54.602 Test: blockdev writev readv block ...passed 00:08:54.602 Test: blockdev writev readv size > 128k ...passed 00:08:54.602 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:54.602 Test: blockdev comparev and writev ...passed 00:08:54.602 Test: blockdev nvme passthru rw ...[2024-11-27 11:52:44.473247] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:54.602 separate metadata which is not supported yet. 
00:08:54.602 passed 00:08:54.602 Test: blockdev nvme passthru vendor specific ...passed 00:08:54.602 Test: blockdev nvme admin passthru ...[2024-11-27 11:52:44.473914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:54.602 [2024-11-27 11:52:44.473962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:54.602 passed 00:08:54.602 Test: blockdev copy ...passed 00:08:54.602 00:08:54.602 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.602 suites 7 7 n/a 0 0 00:08:54.602 tests 161 161 161 0 0 00:08:54.602 asserts 1025 1025 1025 0 n/a 00:08:54.602 00:08:54.602 Elapsed time = 1.719 seconds 00:08:54.602 0 00:08:54.602 11:52:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62511 00:08:54.602 11:52:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62511 ']' 00:08:54.602 11:52:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62511 00:08:54.602 11:52:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:08:54.602 11:52:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.602 11:52:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62511 00:08:54.602 killing process with pid 62511 00:08:54.602 11:52:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.602 11:52:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.602 11:52:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62511' 00:08:54.602 11:52:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62511 00:08:54.602 11:52:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62511 00:08:55.540 11:52:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:55.540 00:08:55.540 real 0m2.883s 00:08:55.540 user 0m7.340s 00:08:55.540 sys 0m0.438s 00:08:55.540 11:52:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.540 11:52:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:55.540 ************************************ 00:08:55.540 END TEST bdev_bounds 00:08:55.540 ************************************ 00:08:55.800 11:52:45 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:55.800 11:52:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:55.800 11:52:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.800 11:52:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:55.800 ************************************ 00:08:55.800 START TEST bdev_nbd 00:08:55.800 ************************************ 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:55.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62566 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62566 /var/tmp/spdk-nbd.sock 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62566 ']' 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.800 11:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:55.800 [2024-11-27 11:52:45.765171] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
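The waitforlisten call above is the usual SPDK autotest idiom: spawn the app with a private RPC socket, then poll that socket until an RPC succeeds. A minimal sketch of the same pattern, assuming the bdev_svc and rpc.py paths shown in the traces; the retry budget and sleep interval are illustrative:

  sock=/var/tmp/spdk-nbd.sock
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  nbd_pid=$!
  for _ in $(seq 1 100); do
      # rpc_get_methods only succeeds once the app is accepting RPCs on $sock
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && break
      sleep 0.1
  done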
00:08:55.800 [2024-11-27 11:52:45.765433] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.059 [2024-11-27 11:52:45.949744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.059 [2024-11-27 11:52:46.057396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:56.998 11:52:46 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:56.998 1+0 records in 00:08:56.998 1+0 records out 00:08:56.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629374 s, 6.5 MB/s 00:08:56.998 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.998 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:56.998 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.998 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:56.998 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:56.998 11:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:56.998 11:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:56.998 11:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:57.258 11:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:57.258 11:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:57.258 11:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:57.258 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:57.258 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:57.258 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:57.258 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:57.258 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:57.258 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:57.258 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:57.258 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:57.258 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:57.258 1+0 records in 00:08:57.258 1+0 records out 00:08:57.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701004 s, 5.8 MB/s 00:08:57.258 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.258 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:57.258 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.258 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:57.258 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:57.258 11:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:57.258 11:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:57.258 11:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:57.518 11:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:57.518 11:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:57.518 11:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:57.518 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:08:57.518 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:57.518 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:57.518 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:57.518 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:08:57.518 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:57.518 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:57.518 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:57.518 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:57.518 1+0 records in 00:08:57.518 1+0 records out 00:08:57.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621954 s, 6.6 MB/s 00:08:57.518 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.518 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:57.518 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.518 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:57.518 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:57.518 11:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:57.518 11:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:57.518 11:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:57.777 11:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:57.777 11:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:57.777 11:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:57.777 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:08:57.777 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:57.777 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:57.777 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:57.777 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:08:57.777 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:57.777 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:57.778 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:57.778 11:52:47 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:57.778 1+0 records in 00:08:57.778 1+0 records out 00:08:57.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102389 s, 4.0 MB/s 00:08:57.778 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.778 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:57.778 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.778 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:57.778 11:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:57.778 11:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:57.778 11:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:57.778 11:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:58.037 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:58.037 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:58.297 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:58.297 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:08:58.297 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:58.297 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:58.297 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:58.297 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:08:58.297 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:58.297 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:58.297 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:58.297 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:58.297 1+0 records in 00:08:58.297 1+0 records out 00:08:58.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000800984 s, 5.1 MB/s 00:08:58.297 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.297 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:58.297 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.297 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:58.297 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:58.297 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:58.297 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:58.297 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
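Each nbd_start_disk in this loop is followed by waitfornbd, which does two things: poll /proc/partitions until the kernel has registered the new node, then prove the device actually serves I/O by reading a single 4 KiB block with O_DIRECT and checking the byte count. Reduced to a sketch (loop bounds as in the traces; the scratch-file path and error handling are illustrative):

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # one direct-I/O read proves the device answers I/O, not merely exists
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      [[ $(stat -c %s /tmp/nbdtest) -eq 4096 ]]
  }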
00:08:58.297 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:58.297 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:58.556 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:58.556 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:08:58.556 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:58.556 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:58.556 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:58.556 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:08:58.556 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:58.556 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:58.556 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:58.556 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:58.556 1+0 records in 00:08:58.556 1+0 records out 00:08:58.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676103 s, 6.1 MB/s 00:08:58.556 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.556 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:58.556 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.556 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:58.556 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:58.557 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:58.557 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:58.557 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:58.816 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:58.816 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:58.816 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:58.816 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:08:58.816 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:58.816 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:58.816 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:58.816 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:08:58.816 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:58.816 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:58.816 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:58.816 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:58.816 1+0 records in 00:08:58.816 1+0 records out 00:08:58.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000753883 s, 5.4 MB/s 00:08:58.816 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.816 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:58.816 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.816 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:58.816 11:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:58.816 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:58.816 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:58.816 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:59.075 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:59.075 { 00:08:59.075 "nbd_device": "/dev/nbd0", 00:08:59.075 "bdev_name": "Nvme0n1" 00:08:59.075 }, 00:08:59.075 { 00:08:59.075 "nbd_device": "/dev/nbd1", 00:08:59.075 "bdev_name": "Nvme1n1p1" 00:08:59.075 }, 00:08:59.075 { 00:08:59.075 "nbd_device": "/dev/nbd2", 00:08:59.075 "bdev_name": "Nvme1n1p2" 00:08:59.075 }, 00:08:59.075 { 00:08:59.075 "nbd_device": "/dev/nbd3", 00:08:59.075 "bdev_name": "Nvme2n1" 00:08:59.075 }, 00:08:59.075 { 00:08:59.075 "nbd_device": "/dev/nbd4", 00:08:59.075 "bdev_name": "Nvme2n2" 00:08:59.075 }, 00:08:59.075 { 00:08:59.075 "nbd_device": "/dev/nbd5", 00:08:59.075 "bdev_name": "Nvme2n3" 00:08:59.075 }, 00:08:59.075 { 00:08:59.075 "nbd_device": "/dev/nbd6", 00:08:59.075 "bdev_name": "Nvme3n1" 00:08:59.075 } 00:08:59.075 ]' 00:08:59.075 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:59.075 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:59.075 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:59.075 { 00:08:59.075 "nbd_device": "/dev/nbd0", 00:08:59.075 "bdev_name": "Nvme0n1" 00:08:59.075 }, 00:08:59.075 { 00:08:59.075 "nbd_device": "/dev/nbd1", 00:08:59.075 "bdev_name": "Nvme1n1p1" 00:08:59.075 }, 00:08:59.075 { 00:08:59.075 "nbd_device": "/dev/nbd2", 00:08:59.075 "bdev_name": "Nvme1n1p2" 00:08:59.075 }, 00:08:59.075 { 00:08:59.075 "nbd_device": "/dev/nbd3", 00:08:59.075 "bdev_name": "Nvme2n1" 00:08:59.075 }, 00:08:59.075 { 00:08:59.075 "nbd_device": "/dev/nbd4", 00:08:59.075 "bdev_name": "Nvme2n2" 00:08:59.075 }, 00:08:59.075 { 00:08:59.075 "nbd_device": "/dev/nbd5", 00:08:59.075 "bdev_name": "Nvme2n3" 00:08:59.075 }, 00:08:59.075 { 00:08:59.075 "nbd_device": "/dev/nbd6", 00:08:59.075 "bdev_name": "Nvme3n1" 00:08:59.075 } 00:08:59.075 ]' 00:08:59.075 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:59.075 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.075 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:59.075 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:59.075 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:59.075 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.075 11:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:59.075 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:59.334 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:59.334 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:59.334 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.334 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.334 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:59.334 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:59.334 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.334 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.334 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:59.334 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:59.334 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:59.334 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:59.334 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.334 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.334 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:59.334 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:59.334 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.334 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.334 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:59.592 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:59.592 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:59.592 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:59.592 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.592 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.592 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:59.592 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:59.592 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.592 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.592 11:52:49 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:59.851 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:59.851 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:59.851 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:59.851 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.851 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.851 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:59.851 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:59.851 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.851 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.851 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:00.111 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:00.111 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:00.111 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:00.111 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.111 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.111 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:00.111 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:00.111 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.111 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.111 11:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:00.370 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:00.370 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:00.370 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:00.370 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.370 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.370 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:00.370 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:00.370 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.370 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.370 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:00.370 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:00.370 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:00.370 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
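Teardown mirrors setup: nbd_stop_disk is issued for each node and waitfornbd_exit then polls until the name disappears from /proc/partitions, so a later test cannot race a half-detached device. The stop side, sketched under the same assumptions as above:

  waitfornbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          # once grep fails, the kernel has dropped the device; we are done
          grep -q -w "$nbd_name" /proc/partitions || return 0
          sleep 0.1
      done
      return 1
  }
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  waitfornbd_exit nbd0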
00:09:00.370 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.370 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.370 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:00.370 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:00.370 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.629 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:00.629 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.629 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:00.629 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:00.629 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:00.629 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:00.629 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:00.630 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:00.630 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:00.914 11:52:50 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:00.914 /dev/nbd0 00:09:00.914 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:00.915 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:00.915 11:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:00.915 11:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:00.915 11:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:00.915 11:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:00.915 11:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:00.915 11:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:00.915 11:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:00.915 11:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:00.915 11:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:00.915 1+0 records in 00:09:00.915 1+0 records out 00:09:00.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627189 s, 6.5 MB/s 00:09:00.915 11:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.915 11:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:00.915 11:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.915 11:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:00.915 11:52:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:00.915 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:00.915 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:00.915 11:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:09:01.172 /dev/nbd1 00:09:01.172 11:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:01.172 11:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:01.172 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:01.172 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:01.172 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:01.172 11:52:51 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:01.172 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:01.172 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:01.172 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:01.172 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:01.172 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.172 1+0 records in 00:09:01.172 1+0 records out 00:09:01.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000700616 s, 5.8 MB/s 00:09:01.173 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.173 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:01.173 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.173 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:01.173 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:01.173 11:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.173 11:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:01.173 11:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:09:01.431 /dev/nbd10 00:09:01.431 11:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:01.431 11:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:01.431 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:09:01.431 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:01.431 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:01.431 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:01.431 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:09:01.431 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:01.431 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:01.431 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:01.431 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.431 1+0 records in 00:09:01.431 1+0 records out 00:09:01.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000759144 s, 5.4 MB/s 00:09:01.431 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.431 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:01.431 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.431 11:52:51 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:01.431 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:01.431 11:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.431 11:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:01.431 11:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:01.689 /dev/nbd11 00:09:01.689 11:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:01.689 11:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:01.689 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:09:01.689 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:01.689 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:01.689 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:01.689 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:09:01.689 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:01.689 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:01.689 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:01.689 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.689 1+0 records in 00:09:01.689 1+0 records out 00:09:01.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448527 s, 9.1 MB/s 00:09:01.689 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.689 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:01.689 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.689 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:01.689 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:01.689 11:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.689 11:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:01.689 11:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:01.946 /dev/nbd12 00:09:01.946 11:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:01.946 11:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:01.946 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:09:01.946 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:01.946 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:01.946 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:01.946 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
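This second pass differs from the first in that every bdev is attached to an explicitly requested node (nbd_start_disk Nvme1n1p2 /dev/nbd10, and so on), and the resulting mapping is read back as JSON via nbd_get_disks and filtered with jq, exactly as the traces further down show. The round trip, as a compact sketch:

  sock=/var/tmp/spdk-nbd.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
  # nbd_get_disks returns [{"nbd_device": "/dev/nbd0", "bdev_name": "Nvme0n1"}, ...]
  "$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'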
00:09:01.946 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:01.946 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:01.947 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:01.947 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.947 1+0 records in 00:09:01.947 1+0 records out 00:09:01.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000776722 s, 5.3 MB/s 00:09:01.947 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.947 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:01.947 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.947 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:01.947 11:52:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:01.947 11:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.947 11:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:01.947 11:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:02.205 /dev/nbd13 00:09:02.205 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:02.205 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:02.205 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:09:02.205 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:02.205 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:02.205 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:02.205 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:09:02.205 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:02.205 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:02.205 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:02.205 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:02.205 1+0 records in 00:09:02.205 1+0 records out 00:09:02.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000927575 s, 4.4 MB/s 00:09:02.205 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.205 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:02.205 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.205 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:02.205 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:02.205 11:52:52 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:02.205 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:02.205 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:02.465 /dev/nbd14 00:09:02.465 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:02.465 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:02.465 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:09:02.465 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:02.465 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:02.465 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:02.465 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:09:02.465 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:02.465 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:02.465 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:02.465 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:02.465 1+0 records in 00:09:02.465 1+0 records out 00:09:02.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00235807 s, 1.7 MB/s 00:09:02.465 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.465 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:02.465 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.465 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:02.465 11:52:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:02.465 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:02.465 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:02.465 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:02.465 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.465 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:02.724 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:02.724 { 00:09:02.724 "nbd_device": "/dev/nbd0", 00:09:02.724 "bdev_name": "Nvme0n1" 00:09:02.724 }, 00:09:02.724 { 00:09:02.724 "nbd_device": "/dev/nbd1", 00:09:02.724 "bdev_name": "Nvme1n1p1" 00:09:02.724 }, 00:09:02.724 { 00:09:02.724 "nbd_device": "/dev/nbd10", 00:09:02.724 "bdev_name": "Nvme1n1p2" 00:09:02.724 }, 00:09:02.724 { 00:09:02.724 "nbd_device": "/dev/nbd11", 00:09:02.724 "bdev_name": "Nvme2n1" 00:09:02.724 }, 00:09:02.724 { 00:09:02.724 "nbd_device": "/dev/nbd12", 00:09:02.724 "bdev_name": "Nvme2n2" 00:09:02.724 }, 00:09:02.724 { 00:09:02.724 "nbd_device": "/dev/nbd13", 00:09:02.724 "bdev_name": "Nvme2n3" 
00:09:02.724 }, 00:09:02.724 { 00:09:02.724 "nbd_device": "/dev/nbd14", 00:09:02.724 "bdev_name": "Nvme3n1" 00:09:02.724 } 00:09:02.724 ]' 00:09:02.724 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:02.724 { 00:09:02.724 "nbd_device": "/dev/nbd0", 00:09:02.724 "bdev_name": "Nvme0n1" 00:09:02.724 }, 00:09:02.724 { 00:09:02.724 "nbd_device": "/dev/nbd1", 00:09:02.724 "bdev_name": "Nvme1n1p1" 00:09:02.724 }, 00:09:02.724 { 00:09:02.724 "nbd_device": "/dev/nbd10", 00:09:02.724 "bdev_name": "Nvme1n1p2" 00:09:02.724 }, 00:09:02.724 { 00:09:02.724 "nbd_device": "/dev/nbd11", 00:09:02.724 "bdev_name": "Nvme2n1" 00:09:02.724 }, 00:09:02.724 { 00:09:02.724 "nbd_device": "/dev/nbd12", 00:09:02.724 "bdev_name": "Nvme2n2" 00:09:02.724 }, 00:09:02.724 { 00:09:02.724 "nbd_device": "/dev/nbd13", 00:09:02.724 "bdev_name": "Nvme2n3" 00:09:02.724 }, 00:09:02.724 { 00:09:02.724 "nbd_device": "/dev/nbd14", 00:09:02.724 "bdev_name": "Nvme3n1" 00:09:02.724 } 00:09:02.724 ]' 00:09:02.724 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:02.724 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:02.724 /dev/nbd1 00:09:02.724 /dev/nbd10 00:09:02.724 /dev/nbd11 00:09:02.724 /dev/nbd12 00:09:02.724 /dev/nbd13 00:09:02.724 /dev/nbd14' 00:09:02.724 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:02.724 /dev/nbd1 00:09:02.724 /dev/nbd10 00:09:02.724 /dev/nbd11 00:09:02.724 /dev/nbd12 00:09:02.725 /dev/nbd13 00:09:02.725 /dev/nbd14' 00:09:02.725 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:02.725 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:02.725 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:02.725 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:02.725 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:02.725 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:02.725 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:02.725 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:02.725 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:02.725 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:02.725 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:02.985 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:02.985 256+0 records in 00:09:02.985 256+0 records out 00:09:02.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00682531 s, 154 MB/s 00:09:02.985 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:02.985 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:02.985 256+0 records in 00:09:02.985 256+0 records out 00:09:02.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.138429 s, 7.6 MB/s 00:09:02.985 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:02.985 11:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:03.244 256+0 records in 00:09:03.244 256+0 records out 00:09:03.244 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149929 s, 7.0 MB/s 00:09:03.244 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.244 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:03.244 256+0 records in 00:09:03.244 256+0 records out 00:09:03.244 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150385 s, 7.0 MB/s 00:09:03.245 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.245 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:03.504 256+0 records in 00:09:03.504 256+0 records out 00:09:03.504 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156228 s, 6.7 MB/s 00:09:03.504 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.504 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:03.763 256+0 records in 00:09:03.763 256+0 records out 00:09:03.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149078 s, 7.0 MB/s 00:09:03.763 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.763 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:03.763 256+0 records in 00:09:03.763 256+0 records out 00:09:03.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153904 s, 6.8 MB/s 00:09:03.763 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.763 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:04.022 256+0 records in 00:09:04.022 256+0 records out 00:09:04.022 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150613 s, 7.0 MB/s 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.023 11:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:04.281 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:04.281 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:04.281 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:04.281 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.281 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.281 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:04.282 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:04.282 11:52:54 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:04.282 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.282 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:04.540 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:04.540 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:04.540 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:04.540 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.540 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.540 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:04.540 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:04.540 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.540 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.540 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:04.799 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:04.799 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:04.799 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:04.799 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.799 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.799 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:04.799 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:04.799 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.799 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.799 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:05.058 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:05.058 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:05.058 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:05.058 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.058 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.058 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:05.058 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:05.058 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:05.058 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:05.058 11:52:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:05.058 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:09:05.058 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:05.058 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:05.058 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.058 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.058 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:05.058 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:05.058 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:05.058 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:05.058 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:05.317 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:05.317 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:05.317 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:05.317 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.317 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.317 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:05.317 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:05.317 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:05.317 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:05.317 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:05.576 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:05.576 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:05.576 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:05.576 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.576 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.576 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:05.576 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:05.576 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:05.576 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:05.576 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.576 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:05.836 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:05.836 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:05.836 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:05.836 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:09:05.836 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:05.836 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:05.836 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:05.836 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:05.836 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:05.836 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:05.836 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:05.836 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:05.836 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:05.836 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.836 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:09:05.836 11:52:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:06.096 malloc_lvol_verify 00:09:06.096 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:06.355 4f462e0e-e8a1-493a-883e-3c4e9f6337f3 00:09:06.355 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:06.615 4bcdc8b8-4feb-471b-b2b4-7da67756f8e3 00:09:06.615 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:06.615 /dev/nbd0 00:09:06.615 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:09:06.615 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:09:06.615 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:09:06.615 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:09:06.615 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:09:06.615 mke2fs 1.47.0 (5-Feb-2023) 00:09:06.615 Discarding device blocks: 0/4096 done 00:09:06.615 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:06.615 00:09:06.615 Allocating group tables: 0/1 done 00:09:06.615 Writing inode tables: 0/1 done 00:09:06.615 Creating journal (1024 blocks): done 00:09:06.874 Writing superblocks and filesystem accounting information: 0/1 done 00:09:06.874 00:09:06.874 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:06.874 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.874 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:06.874 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:06.874 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:06.874 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:09:06.874 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:06.874 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:06.874 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:06.874 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:06.874 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.874 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.874 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:06.874 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:06.874 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.874 11:52:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62566 00:09:06.874 11:52:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62566 ']' 00:09:06.874 11:52:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62566 00:09:06.874 11:52:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:09:06.874 11:52:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.874 11:52:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62566 00:09:07.134 11:52:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.134 11:52:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.134 killing process with pid 62566 00:09:07.134 11:52:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62566' 00:09:07.134 11:52:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62566 00:09:07.134 11:52:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62566 00:09:08.082 11:52:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:08.082 00:09:08.082 real 0m12.459s 00:09:08.082 user 0m15.802s 00:09:08.082 sys 0m5.463s 00:09:08.082 11:52:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.082 11:52:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:08.082 ************************************ 00:09:08.082 END TEST bdev_nbd 00:09:08.082 ************************************ 00:09:08.342 11:52:58 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:09:08.342 11:52:58 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:09:08.342 11:52:58 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:09:08.342 skipping fio tests on NVMe due to multi-ns failures. 00:09:08.342 11:52:58 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:09:08.342 11:52:58 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:08.342 11:52:58 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:08.342 11:52:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:09:08.342 11:52:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.342 11:52:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:08.342 ************************************ 00:09:08.342 START TEST bdev_verify 00:09:08.342 ************************************ 00:09:08.342 11:52:58 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:08.342 [2024-11-27 11:52:58.309841] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:08.342 [2024-11-27 11:52:58.309978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63001 ] 00:09:08.602 [2024-11-27 11:52:58.499590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:08.602 [2024-11-27 11:52:58.616428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.602 [2024-11-27 11:52:58.616432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.539 Running I/O for 5 seconds... 
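Before the throughput samples and summary table that follow, a commented restatement of the bdevperf invocation may help. Each flag reading below is inferred from matching lines in this log ("depth: 128", "IO size: 4096", "Running I/O for 5 seconds", two reactors started for mask 0x3) rather than from the tool's help text; -C is carried over from the log unglossed:

# bdevperf verify run, flags annotated from this log's own output:
#   -q 128     queue depth per job         ("depth: 128" in the Job lines)
#   -o 4096    I/O size in bytes           ("IO size: 4096")
#   -w verify  data-verification workload  (the "Verification LBA range" rows below)
#   -t 5       run time in seconds         ("Running I/O for 5 seconds...")
#   -m 0x3     core mask, cores 0 and 1    (the two reactors started above)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -q 128 -o 4096 -w verify -t 5 -C -m 0x3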
00:09:11.856 14976.00 IOPS, 58.50 MiB/s [2024-11-27T11:53:02.847Z] 16480.00 IOPS, 64.38 MiB/s [2024-11-27T11:53:03.785Z] 16874.67 IOPS, 65.92 MiB/s [2024-11-27T11:53:04.722Z] 16944.00 IOPS, 66.19 MiB/s [2024-11-27T11:53:04.722Z] 16844.80 IOPS, 65.80 MiB/s 00:09:14.669 Latency(us) 00:09:14.669 [2024-11-27T11:53:04.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.669 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:14.669 Verification LBA range: start 0x0 length 0xbd0bd 00:09:14.669 Nvme0n1 : 5.11 1014.69 3.96 0.00 0.00 125231.76 13686.23 104436.49 00:09:14.669 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:14.669 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:14.669 Nvme0n1 : 5.05 1342.54 5.24 0.00 0.00 95062.22 19897.68 113701.01 00:09:14.669 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:14.669 Verification LBA range: start 0x0 length 0x4ff80 00:09:14.669 Nvme1n1p1 : 5.13 1023.59 4.00 0.00 0.00 124297.80 14107.35 91803.04 00:09:14.669 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:14.669 Verification LBA range: start 0x4ff80 length 0x4ff80 00:09:14.669 Nvme1n1p1 : 5.05 1342.14 5.24 0.00 0.00 94970.26 20318.79 110332.09 00:09:14.669 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:14.669 Verification LBA range: start 0x0 length 0x4ff7f 00:09:14.669 Nvme1n1p2 : 5.13 1023.32 4.00 0.00 0.00 123999.01 14107.35 99383.11 00:09:14.669 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:14.669 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:09:14.669 Nvme1n1p2 : 5.06 1341.73 5.24 0.00 0.00 94636.95 19371.28 104015.37 00:09:14.669 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:14.669 Verification LBA range: start 0x0 length 0x80000 00:09:14.669 Nvme2n1 : 5.13 1022.83 4.00 0.00 0.00 123805.36 15686.53 96856.42 00:09:14.669 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:14.669 Verification LBA range: start 0x80000 length 0x80000 00:09:14.669 Nvme2n1 : 5.08 1348.93 5.27 0.00 0.00 93865.13 5395.53 89276.35 00:09:14.669 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:14.669 Verification LBA range: start 0x0 length 0x80000 00:09:14.669 Nvme2n2 : 5.13 1022.61 3.99 0.00 0.00 123604.80 15686.53 96014.19 00:09:14.669 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:14.669 Verification LBA range: start 0x80000 length 0x80000 00:09:14.669 Nvme2n2 : 5.09 1357.04 5.30 0.00 0.00 93254.38 13317.76 83801.86 00:09:14.669 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:14.669 Verification LBA range: start 0x0 length 0x80000 00:09:14.669 Nvme2n3 : 5.13 1022.37 3.99 0.00 0.00 123374.46 15475.97 94329.73 00:09:14.669 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:14.669 Verification LBA range: start 0x80000 length 0x80000 00:09:14.669 Nvme2n3 : 5.09 1356.67 5.30 0.00 0.00 93128.62 13317.76 84222.97 00:09:14.669 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:14.669 Verification LBA range: start 0x0 length 0x20000 00:09:14.669 Nvme3n1 : 5.13 1022.13 3.99 0.00 0.00 123284.18 15265.41 98119.76 00:09:14.669 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:14.669 Verification LBA range: start 0x20000 length 0x20000 
00:09:14.669 Nvme3n1 : 5.10 1356.30 5.30 0.00 0.00 93050.73 12054.41 85065.20 00:09:14.669 [2024-11-27T11:53:04.722Z] =================================================================================================================== 00:09:14.669 [2024-11-27T11:53:04.722Z] Total : 16596.87 64.83 0.00 0.00 106970.59 5395.53 113701.01 00:09:16.046 00:09:16.046 real 0m7.615s 00:09:16.046 user 0m13.970s 00:09:16.046 sys 0m0.355s 00:09:16.046 11:53:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.046 11:53:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:16.046 ************************************ 00:09:16.046 END TEST bdev_verify 00:09:16.046 ************************************ 00:09:16.046 11:53:05 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:16.046 11:53:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:09:16.046 11:53:05 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.046 11:53:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:16.046 ************************************ 00:09:16.046 START TEST bdev_verify_big_io 00:09:16.046 ************************************ 00:09:16.046 11:53:05 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:16.046 [2024-11-27 11:53:06.000897] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:16.046 [2024-11-27 11:53:06.001033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63099 ] 00:09:16.305 [2024-11-27 11:53:06.188116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:16.305 [2024-11-27 11:53:06.297404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.305 [2024-11-27 11:53:06.297467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.243 Running I/O for 5 seconds... 
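The big-I/O pass below repeats the verify workload with -o 65536, i.e. 64 KiB per I/O instead of 4 KiB, which is why its per-second operation counts drop while throughput in MiB/s rises. In these tables the MiB/s column is simply IOPS times the I/O size divided by 2^20, a quick sanity check for any row; for the first progress sample below:

# 2836.00 IOPS at 64 KiB per I/O: 2836 * 65536 / 1048576 = 177.25 MiB/s,
# matching the "2836.00 IOPS, 177.25 MiB/s" sample that follows.
awk 'BEGIN { printf "%.2f MiB/s\n", 2836.00 * 65536 / 1048576 }'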
00:09:22.815 2836.00 IOPS, 177.25 MiB/s [2024-11-27T11:53:13.437Z] 3978.50 IOPS, 248.66 MiB/s [2024-11-27T11:53:13.697Z] 4430.00 IOPS, 276.88 MiB/s 00:09:23.644 Latency(us) 00:09:23.644 [2024-11-27T11:53:13.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.644 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.644 Verification LBA range: start 0x0 length 0xbd0b 00:09:23.644 Nvme0n1 : 5.70 84.42 5.28 0.00 0.00 1445307.13 18107.94 1583391.87 00:09:23.644 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.644 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:23.644 Nvme0n1 : 5.47 196.03 12.25 0.00 0.00 638210.07 26109.12 629987.83 00:09:23.644 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.644 Verification LBA range: start 0x0 length 0x4ff8 00:09:23.644 Nvme1n1p1 : 5.70 93.00 5.81 0.00 0.00 1249376.68 52849.91 1266713.50 00:09:23.644 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.644 Verification LBA range: start 0x4ff8 length 0x4ff8 00:09:23.644 Nvme1n1p1 : 5.50 198.56 12.41 0.00 0.00 618514.57 64009.46 589560.80 00:09:23.644 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.644 Verification LBA range: start 0x0 length 0x4ff7 00:09:23.644 Nvme1n1p2 : 5.81 106.19 6.64 0.00 0.00 1059235.43 38532.01 1246499.98 00:09:23.644 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.644 Verification LBA range: start 0x4ff7 length 0x4ff7 00:09:23.644 Nvme1n1p2 : 5.54 203.06 12.69 0.00 0.00 598266.06 23056.04 650201.34 00:09:23.644 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.644 Verification LBA range: start 0x0 length 0x8000 00:09:23.644 Nvme2n1 : 5.84 106.46 6.65 0.00 0.00 1021532.91 29688.60 2317816.19 00:09:23.644 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.644 Verification LBA range: start 0x8000 length 0x8000 00:09:23.644 Nvme2n1 : 5.54 203.91 12.74 0.00 0.00 587606.15 23792.99 609774.32 00:09:23.644 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.644 Verification LBA range: start 0x0 length 0x8000 00:09:23.644 Nvme2n2 : 6.03 139.69 8.73 0.00 0.00 748398.19 26846.07 2358243.21 00:09:23.644 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.644 Verification LBA range: start 0x8000 length 0x8000 00:09:23.644 Nvme2n2 : 5.54 207.98 13.00 0.00 0.00 569050.10 32636.40 623249.99 00:09:23.644 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.644 Verification LBA range: start 0x0 length 0x8000 00:09:23.644 Nvme2n3 : 6.24 195.85 12.24 0.00 0.00 519185.68 14317.91 2169583.76 00:09:23.644 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.644 Verification LBA range: start 0x8000 length 0x8000 00:09:23.644 Nvme2n3 : 5.54 207.89 12.99 0.00 0.00 559603.25 33478.63 636725.67 00:09:23.644 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:23.644 Verification LBA range: start 0x0 length 0x2000 00:09:23.644 Nvme3n1 : 6.35 257.76 16.11 0.00 0.00 382173.71 470.46 2425621.59 00:09:23.644 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:23.644 Verification LBA range: start 0x2000 length 0x2000 00:09:23.644 Nvme3n1 : 5.59 224.53 14.03 0.00 0.00 511268.69 7369.51 646832.42 00:09:23.644 
[2024-11-27T11:53:13.697Z] =================================================================================================================== 00:09:23.644 [2024-11-27T11:53:13.697Z] Total : 2425.33 151.58 0.00 0.00 658675.28 470.46 2425621.59 00:09:25.551 00:09:25.551 real 0m9.573s 00:09:25.551 user 0m17.895s 00:09:25.551 sys 0m0.370s 00:09:25.551 11:53:15 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.551 ************************************ 00:09:25.551 END TEST bdev_verify_big_io 00:09:25.551 11:53:15 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:25.551 ************************************ 00:09:25.551 11:53:15 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:25.551 11:53:15 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:25.551 11:53:15 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.551 11:53:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:25.551 ************************************ 00:09:25.551 START TEST bdev_write_zeroes 00:09:25.551 ************************************ 00:09:25.551 11:53:15 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:25.811 [2024-11-27 11:53:15.651460] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:25.811 [2024-11-27 11:53:15.651570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63221 ] 00:09:25.811 [2024-11-27 11:53:15.834422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.070 [2024-11-27 11:53:15.948270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.697 Running I/O for 1 seconds... 
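The one-second pass that follows swaps the workload for write_zeroes: the same bdevperf binary and config, but each I/O asks the bdev layer to zero a 4 KiB range rather than write buffered data (as a general SPDK behavior the bdev layer falls back to plain zero-buffer writes when a device lacks native support; that detail is an inference, not something this log shows). The invocation differs from the verify runs only in workload and duration:

# Same binary and config as the verify runs; zero-fill workload, 1 s run,
# mirroring the run_test bdev_write_zeroes command traced just below.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -q 128 -o 4096 -w write_zeroes -t 1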
00:09:27.657 68992.00 IOPS, 269.50 MiB/s 00:09:27.657 Latency(us) 00:09:27.657 [2024-11-27T11:53:17.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.657 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:27.657 Nvme0n1 : 1.02 9833.77 38.41 0.00 0.00 12982.24 10896.35 28425.25 00:09:27.657 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:27.657 Nvme1n1p1 : 1.02 9823.30 38.37 0.00 0.00 12978.70 10896.35 28214.70 00:09:27.657 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:27.657 Nvme1n1p2 : 1.02 9813.17 38.33 0.00 0.00 12958.90 10843.71 27372.47 00:09:27.657 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:27.657 Nvme2n1 : 1.02 9804.35 38.30 0.00 0.00 12904.97 11159.54 25056.33 00:09:27.657 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:27.657 Nvme2n2 : 1.03 9843.56 38.45 0.00 0.00 12840.71 7211.59 21476.86 00:09:27.657 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:27.657 Nvme2n3 : 1.03 9792.97 38.25 0.00 0.00 12857.94 10633.15 21266.30 00:09:27.657 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:27.657 Nvme3n1 : 1.03 9834.93 38.42 0.00 0.00 12792.98 6711.52 22529.64 00:09:27.657 [2024-11-27T11:53:17.710Z] =================================================================================================================== 00:09:27.657 [2024-11-27T11:53:17.710Z] Total : 68746.05 268.54 0.00 0.00 12902.19 6711.52 28425.25 00:09:29.036 00:09:29.036 real 0m3.228s 00:09:29.036 user 0m2.833s 00:09:29.036 sys 0m0.276s 00:09:29.036 11:53:18 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.036 ************************************ 00:09:29.036 END TEST bdev_write_zeroes 00:09:29.036 11:53:18 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:29.036 ************************************ 00:09:29.036 11:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:29.036 11:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:29.036 11:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.036 11:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:29.036 ************************************ 00:09:29.036 START TEST bdev_json_nonenclosed 00:09:29.036 ************************************ 00:09:29.036 11:53:18 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:29.036 [2024-11-27 11:53:18.956679] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:29.036 [2024-11-27 11:53:18.956807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63280 ] 00:09:29.295 [2024-11-27 11:53:19.142118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.295 [2024-11-27 11:53:19.252093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.296 [2024-11-27 11:53:19.252186] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:29.296 [2024-11-27 11:53:19.252207] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:29.296 [2024-11-27 11:53:19.252219] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:29.555 00:09:29.555 real 0m0.639s 00:09:29.555 user 0m0.381s 00:09:29.555 sys 0m0.154s 00:09:29.555 ************************************ 00:09:29.555 END TEST bdev_json_nonenclosed 00:09:29.555 ************************************ 00:09:29.555 11:53:19 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.555 11:53:19 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:29.555 11:53:19 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:29.555 11:53:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:29.555 11:53:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.555 11:53:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:29.555 ************************************ 00:09:29.555 START TEST bdev_json_nonarray 00:09:29.555 ************************************ 00:09:29.555 11:53:19 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:29.814 [2024-11-27 11:53:19.674680] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:29.814 [2024-11-27 11:53:19.674802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63305 ] 00:09:29.814 [2024-11-27 11:53:19.859448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.074 [2024-11-27 11:53:19.968010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.074 [2024-11-27 11:53:19.968127] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:30.074 [2024-11-27 11:53:19.968149] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:30.074 [2024-11-27 11:53:19.968161] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:30.334 00:09:30.334 real 0m0.645s 00:09:30.334 user 0m0.390s 00:09:30.334 sys 0m0.150s 00:09:30.334 11:53:20 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.334 11:53:20 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:30.334 ************************************ 00:09:30.334 END TEST bdev_json_nonarray 00:09:30.334 ************************************ 00:09:30.334 11:53:20 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:09:30.334 11:53:20 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:09:30.334 11:53:20 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:30.334 11:53:20 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.334 11:53:20 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.334 11:53:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:30.334 ************************************ 00:09:30.334 START TEST bdev_gpt_uuid 00:09:30.334 ************************************ 00:09:30.334 11:53:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:09:30.334 11:53:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:09:30.334 11:53:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:09:30.334 11:53:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63336 00:09:30.334 11:53:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:30.334 11:53:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:30.334 11:53:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63336 00:09:30.334 11:53:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63336 ']' 00:09:30.334 11:53:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.334 11:53:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.334 11:53:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.334 11:53:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.334 11:53:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:30.594 [2024-11-27 11:53:20.412075] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:30.594 [2024-11-27 11:53:20.412195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63336 ] 00:09:30.594 [2024-11-27 11:53:20.593861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.853 [2024-11-27 11:53:20.698021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.792 11:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.792 11:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:09:31.792 11:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:31.792 11:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.792 11:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:32.052 Some configs were skipped because the RPC state that can call them passed over. 00:09:32.052 11:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.052 11:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:09:32.052 11:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.052 11:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:32.052 11:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.052 11:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:32.052 11:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.052 11:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:32.052 11:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.052 11:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:09:32.052 { 00:09:32.052 "name": "Nvme1n1p1", 00:09:32.052 "aliases": [ 00:09:32.052 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:32.052 ], 00:09:32.052 "product_name": "GPT Disk", 00:09:32.052 "block_size": 4096, 00:09:32.052 "num_blocks": 655104, 00:09:32.052 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:32.052 "assigned_rate_limits": { 00:09:32.052 "rw_ios_per_sec": 0, 00:09:32.052 "rw_mbytes_per_sec": 0, 00:09:32.052 "r_mbytes_per_sec": 0, 00:09:32.052 "w_mbytes_per_sec": 0 00:09:32.052 }, 00:09:32.052 "claimed": false, 00:09:32.052 "zoned": false, 00:09:32.052 "supported_io_types": { 00:09:32.052 "read": true, 00:09:32.052 "write": true, 00:09:32.052 "unmap": true, 00:09:32.052 "flush": true, 00:09:32.052 "reset": true, 00:09:32.052 "nvme_admin": false, 00:09:32.052 "nvme_io": false, 00:09:32.052 "nvme_io_md": false, 00:09:32.052 "write_zeroes": true, 00:09:32.052 "zcopy": false, 00:09:32.052 "get_zone_info": false, 00:09:32.052 "zone_management": false, 00:09:32.052 "zone_append": false, 00:09:32.052 "compare": true, 00:09:32.052 "compare_and_write": false, 00:09:32.052 "abort": true, 00:09:32.052 "seek_hole": false, 00:09:32.052 "seek_data": false, 00:09:32.052 "copy": true, 00:09:32.052 "nvme_iov_md": false 00:09:32.052 }, 00:09:32.052 "driver_specific": { 
00:09:32.052 "gpt": { 00:09:32.052 "base_bdev": "Nvme1n1", 00:09:32.052 "offset_blocks": 256, 00:09:32.052 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:32.052 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:32.052 "partition_name": "SPDK_TEST_first" 00:09:32.052 } 00:09:32.052 } 00:09:32.052 } 00:09:32.052 ]' 00:09:32.052 11:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:09:32.052 11:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:09:32.052 11:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:09:32.052 11:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:32.052 11:53:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:32.052 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:32.052 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:32.052 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.052 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:32.052 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.052 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:09:32.052 { 00:09:32.052 "name": "Nvme1n1p2", 00:09:32.052 "aliases": [ 00:09:32.052 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:32.052 ], 00:09:32.052 "product_name": "GPT Disk", 00:09:32.052 "block_size": 4096, 00:09:32.052 "num_blocks": 655103, 00:09:32.052 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:32.052 "assigned_rate_limits": { 00:09:32.052 "rw_ios_per_sec": 0, 00:09:32.052 "rw_mbytes_per_sec": 0, 00:09:32.052 "r_mbytes_per_sec": 0, 00:09:32.052 "w_mbytes_per_sec": 0 00:09:32.052 }, 00:09:32.052 "claimed": false, 00:09:32.052 "zoned": false, 00:09:32.052 "supported_io_types": { 00:09:32.052 "read": true, 00:09:32.052 "write": true, 00:09:32.052 "unmap": true, 00:09:32.052 "flush": true, 00:09:32.052 "reset": true, 00:09:32.052 "nvme_admin": false, 00:09:32.052 "nvme_io": false, 00:09:32.052 "nvme_io_md": false, 00:09:32.052 "write_zeroes": true, 00:09:32.052 "zcopy": false, 00:09:32.052 "get_zone_info": false, 00:09:32.052 "zone_management": false, 00:09:32.052 "zone_append": false, 00:09:32.052 "compare": true, 00:09:32.052 "compare_and_write": false, 00:09:32.052 "abort": true, 00:09:32.052 "seek_hole": false, 00:09:32.052 "seek_data": false, 00:09:32.052 "copy": true, 00:09:32.052 "nvme_iov_md": false 00:09:32.052 }, 00:09:32.052 "driver_specific": { 00:09:32.052 "gpt": { 00:09:32.052 "base_bdev": "Nvme1n1", 00:09:32.052 "offset_blocks": 655360, 00:09:32.052 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:32.052 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:32.052 "partition_name": "SPDK_TEST_second" 00:09:32.052 } 00:09:32.052 } 00:09:32.052 } 00:09:32.052 ]' 00:09:32.052 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:09:32.052 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:09:32.052 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:09:32.312 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:32.312 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:32.312 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:32.312 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63336 00:09:32.312 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63336 ']' 00:09:32.312 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63336 00:09:32.312 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:09:32.312 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:32.312 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63336 00:09:32.312 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:32.312 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:32.312 killing process with pid 63336 00:09:32.312 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63336' 00:09:32.312 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63336 00:09:32.312 11:53:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63336 00:09:34.851 00:09:34.851 real 0m4.189s 00:09:34.851 user 0m4.239s 00:09:34.851 sys 0m0.573s 00:09:34.851 11:53:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.851 11:53:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:34.851 ************************************ 00:09:34.851 END TEST bdev_gpt_uuid 00:09:34.851 ************************************ 00:09:34.851 11:53:24 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:09:34.851 11:53:24 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:09:34.851 11:53:24 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:09:34.851 11:53:24 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:34.851 11:53:24 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:34.851 11:53:24 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:34.851 11:53:24 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:34.851 11:53:24 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:34.851 11:53:24 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:35.111 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:35.371 Waiting for block devices as requested 00:09:35.630 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.630 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:09:35.889 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.889 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.168 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:41.168 11:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:09:41.168 11:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:09:41.168 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:41.168 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:09:41.168 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:41.168 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:41.168 11:53:31 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:41.168 00:09:41.168 real 1m5.166s 00:09:41.168 user 1m20.623s 00:09:41.168 sys 0m12.474s 00:09:41.168 11:53:31 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.168 11:53:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:41.168 ************************************ 00:09:41.168 END TEST blockdev_nvme_gpt 00:09:41.168 ************************************ 00:09:41.428 11:53:31 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:41.428 11:53:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:41.428 11:53:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.428 11:53:31 -- common/autotest_common.sh@10 -- # set +x 00:09:41.428 ************************************ 00:09:41.428 START TEST nvme 00:09:41.428 ************************************ 00:09:41.428 11:53:31 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:41.428 * Looking for test storage... 00:09:41.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:41.428 11:53:31 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:41.428 11:53:31 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:09:41.428 11:53:31 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:41.428 11:53:31 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:41.428 11:53:31 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.428 11:53:31 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.428 11:53:31 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.428 11:53:31 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.428 11:53:31 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.428 11:53:31 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.428 11:53:31 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.428 11:53:31 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.428 11:53:31 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.428 11:53:31 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.428 11:53:31 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.428 11:53:31 nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:41.428 11:53:31 nvme -- scripts/common.sh@345 -- # : 1 00:09:41.428 11:53:31 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.428 11:53:31 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:41.428 11:53:31 nvme -- scripts/common.sh@365 -- # decimal 1 00:09:41.428 11:53:31 nvme -- scripts/common.sh@353 -- # local d=1 00:09:41.428 11:53:31 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.428 11:53:31 nvme -- scripts/common.sh@355 -- # echo 1 00:09:41.428 11:53:31 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.428 11:53:31 nvme -- scripts/common.sh@366 -- # decimal 2 00:09:41.687 11:53:31 nvme -- scripts/common.sh@353 -- # local d=2 00:09:41.687 11:53:31 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.687 11:53:31 nvme -- scripts/common.sh@355 -- # echo 2 00:09:41.688 11:53:31 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.688 11:53:31 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.688 11:53:31 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.688 11:53:31 nvme -- scripts/common.sh@368 -- # return 0 00:09:41.688 11:53:31 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.688 11:53:31 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:41.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.688 --rc genhtml_branch_coverage=1 00:09:41.688 --rc genhtml_function_coverage=1 00:09:41.688 --rc genhtml_legend=1 00:09:41.688 --rc geninfo_all_blocks=1 00:09:41.688 --rc geninfo_unexecuted_blocks=1 00:09:41.688 00:09:41.688 ' 00:09:41.688 11:53:31 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:41.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.688 --rc genhtml_branch_coverage=1 00:09:41.688 --rc genhtml_function_coverage=1 00:09:41.688 --rc genhtml_legend=1 00:09:41.688 --rc geninfo_all_blocks=1 00:09:41.688 --rc geninfo_unexecuted_blocks=1 00:09:41.688 00:09:41.688 ' 00:09:41.688 11:53:31 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:41.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.688 --rc genhtml_branch_coverage=1 00:09:41.688 --rc genhtml_function_coverage=1 00:09:41.688 --rc genhtml_legend=1 00:09:41.688 --rc geninfo_all_blocks=1 00:09:41.688 --rc geninfo_unexecuted_blocks=1 00:09:41.688 00:09:41.688 ' 00:09:41.688 11:53:31 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:41.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.688 --rc genhtml_branch_coverage=1 00:09:41.688 --rc genhtml_function_coverage=1 00:09:41.688 --rc genhtml_legend=1 00:09:41.688 --rc geninfo_all_blocks=1 00:09:41.688 --rc geninfo_unexecuted_blocks=1 00:09:41.688 00:09:41.688 ' 00:09:41.688 11:53:31 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:42.257 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:43.196 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:43.196 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:43.196 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:43.196 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:43.196 11:53:33 nvme -- nvme/nvme.sh@79 -- # uname 00:09:43.196 11:53:33 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:43.196 11:53:33 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:43.196 11:53:33 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:43.196 11:53:33 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:43.196 11:53:33 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:09:43.196 11:53:33 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:09:43.196 11:53:33 nvme -- common/autotest_common.sh@1075 -- # stubpid=63995 00:09:43.196 Waiting for stub to ready for secondary processes... 00:09:43.196 11:53:33 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:43.196 11:53:33 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:09:43.196 11:53:33 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:43.196 11:53:33 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/63995 ]] 00:09:43.196 11:53:33 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:09:43.196 [2024-11-27 11:53:33.236188] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:43.196 [2024-11-27 11:53:33.236321] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:44.135 11:53:34 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:44.135 11:53:34 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/63995 ]] 00:09:44.135 11:53:34 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:09:45.073 [2024-11-27 11:53:34.964601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:45.073 [2024-11-27 11:53:35.084645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.073 [2024-11-27 11:53:35.084812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.073 [2024-11-27 11:53:35.084813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.073 [2024-11-27 11:53:35.104041] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:45.073 [2024-11-27 11:53:35.104078] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:45.073 [2024-11-27 11:53:35.120643] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:45.073 [2024-11-27 11:53:35.121430] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:45.332 [2024-11-27 11:53:35.131869] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:45.332 [2024-11-27 11:53:35.132295] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:45.333 [2024-11-27 11:53:35.132507] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:45.333 [2024-11-27 11:53:35.138330] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:45.333 [2024-11-27 11:53:35.138739] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:45.333 [2024-11-27 11:53:35.138927] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:45.333 [2024-11-27 11:53:35.143639] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:45.333 [2024-11-27 11:53:35.143914] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:45.333 [2024-11-27 11:53:35.144058] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:45.333 [2024-11-27 11:53:35.144166] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:45.333 [2024-11-27 11:53:35.144257] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:45.333 11:53:35 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:45.333 11:53:35 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:09:45.333 done. 00:09:45.333 11:53:35 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:45.333 11:53:35 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:09:45.333 11:53:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.333 11:53:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:45.333 ************************************ 00:09:45.333 START TEST nvme_reset 00:09:45.333 ************************************ 00:09:45.333 11:53:35 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:45.592 Initializing NVMe Controllers 00:09:45.592 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:45.592 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:45.592 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:45.592 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:45.592 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:45.592 00:09:45.592 real 0m0.301s 00:09:45.592 user 0m0.100s 00:09:45.592 sys 0m0.153s 00:09:45.592 11:53:35 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.592 ************************************ 00:09:45.592 END TEST nvme_reset 00:09:45.592 11:53:35 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:45.592 ************************************ 00:09:45.592 11:53:35 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:45.592 11:53:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:45.592 11:53:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.592 11:53:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:45.592 ************************************ 00:09:45.592 START TEST nvme_identify 00:09:45.592 ************************************ 00:09:45.592 11:53:35 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:09:45.592 11:53:35 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:45.592 11:53:35 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:45.592 11:53:35 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:45.592 11:53:35 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:45.592 11:53:35 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:45.592 11:53:35 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:09:45.592 11:53:35 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:45.592 11:53:35 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:45.592 11:53:35 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:45.851 11:53:35 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:45.851 11:53:35 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:45.851 11:53:35 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:46.113 [2024-11-27 11:53:35.902984] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64023 terminated unexpected 00:09:46.113 ===================================================== 00:09:46.113 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:46.113 ===================================================== 00:09:46.113 Controller Capabilities/Features 00:09:46.113 ================================ 00:09:46.113 Vendor ID: 1b36 00:09:46.113 Subsystem Vendor ID: 1af4 00:09:46.113 Serial Number: 12340 00:09:46.113 Model Number: QEMU NVMe Ctrl 00:09:46.113 Firmware Version: 8.0.0 00:09:46.113 Recommended Arb Burst: 6 00:09:46.113 IEEE OUI Identifier: 00 54 52 00:09:46.113 Multi-path I/O 00:09:46.113 May have multiple subsystem ports: No 00:09:46.113 May have multiple controllers: No 00:09:46.113 Associated with SR-IOV VF: No 00:09:46.113 Max Data Transfer Size: 524288 00:09:46.113 Max Number of Namespaces: 256 00:09:46.113 Max Number of I/O Queues: 64 00:09:46.113 NVMe Specification Version (VS): 1.4 00:09:46.113 NVMe Specification Version (Identify): 1.4 00:09:46.113 Maximum Queue Entries: 2048 00:09:46.113 Contiguous Queues Required: Yes 00:09:46.113 Arbitration Mechanisms Supported 00:09:46.113 Weighted Round Robin: Not Supported 00:09:46.113 Vendor Specific: Not Supported 00:09:46.113 Reset Timeout: 7500 ms 00:09:46.113 Doorbell Stride: 4 bytes 00:09:46.113 NVM Subsystem Reset: Not Supported 00:09:46.113 Command Sets Supported 00:09:46.113 NVM Command Set: Supported 00:09:46.113 Boot Partition: Not Supported 00:09:46.113 Memory Page Size Minimum: 4096 bytes 00:09:46.113 Memory Page Size Maximum: 65536 bytes 00:09:46.113 Persistent Memory Region: Not Supported 00:09:46.113 Optional Asynchronous Events Supported 00:09:46.113 Namespace Attribute Notices: Supported 00:09:46.113 Firmware Activation Notices: Not Supported 00:09:46.113 ANA Change Notices: Not Supported 00:09:46.113 PLE Aggregate Log Change Notices: Not Supported 00:09:46.113 LBA Status Info Alert Notices: Not Supported 00:09:46.113 EGE Aggregate Log Change Notices: Not Supported 00:09:46.113 Normal NVM Subsystem Shutdown event: Not Supported 00:09:46.113 Zone Descriptor Change Notices: Not Supported 00:09:46.113 Discovery Log Change Notices: Not Supported 00:09:46.113 Controller Attributes 00:09:46.113 128-bit Host Identifier: Not Supported 00:09:46.113 Non-Operational Permissive Mode: Not Supported 00:09:46.113 NVM Sets: Not Supported 00:09:46.113 Read Recovery Levels: Not Supported 00:09:46.113 Endurance Groups: Not Supported 00:09:46.113 Predictable Latency Mode: Not Supported 00:09:46.113 Traffic Based Keep ALive: Not Supported 00:09:46.113 Namespace Granularity: Not Supported 00:09:46.113 SQ Associations: Not Supported 00:09:46.113 UUID List: Not Supported 00:09:46.113 Multi-Domain Subsystem: Not Supported 00:09:46.113 Fixed Capacity Management: Not Supported 00:09:46.113 Variable Capacity Management: Not Supported 00:09:46.113 Delete Endurance Group: Not Supported 00:09:46.113 Delete NVM Set: Not Supported 00:09:46.113 Extended LBA Formats Supported: Supported 00:09:46.113 Flexible Data Placement Supported: Not Supported 00:09:46.113 00:09:46.113 Controller Memory Buffer Support 00:09:46.113 ================================ 00:09:46.113 Supported: No 
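For reference, the bdf list fed to spdk_nvme_identify here is produced by gen_nvme.sh piped through jq, as the get_nvme_bdfs trace above shows. A minimal standalone sketch of the same pattern follows; the here-doc JSON is a hypothetical stand-in for gen_nvme.sh output (only the jq filter itself appears in this log), and the variable names are illustrative:

#!/usr/bin/env bash
# Enumerate controller PCI addresses the way get_nvme_bdfs does:
# run jq -r '.config[].params.traddr' over an SPDK-style JSON config.
# The here-doc below is a hypothetical stand-in for gen_nvme.sh output.
bdfs=$(jq -r '.config[].params.traddr' <<'EOF'
{"config": [
  {"method": "bdev_nvme_attach_controller",
   "params": {"name": "Nvme0", "trtype": "PCIe", "traddr": "0000:00:10.0"}},
  {"method": "bdev_nvme_attach_controller",
   "params": {"name": "Nvme1", "trtype": "PCIe", "traddr": "0000:00:11.0"}}
]}
EOF
)
# Same intent as the "(( 4 == 0 ))" guard in the trace: fail if nothing was found.
[[ -n "$bdfs" ]] || exit 1
printf '%s\n' "$bdfs"

The GPT UUID assertions earlier in this run lean on the same jq-over-JSON idiom, only against the bdev dump instead of a generated config ('.[0].aliases[0]' and '.[0].driver_specific.gpt.unique_partition_guid').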
00:09:46.113 00:09:46.113 Persistent Memory Region Support 00:09:46.113 ================================ 00:09:46.113 Supported: No 00:09:46.113 00:09:46.113 Admin Command Set Attributes 00:09:46.113 ============================ 00:09:46.113 Security Send/Receive: Not Supported 00:09:46.113 Format NVM: Supported 00:09:46.113 Firmware Activate/Download: Not Supported 00:09:46.113 Namespace Management: Supported 00:09:46.113 Device Self-Test: Not Supported 00:09:46.113 Directives: Supported 00:09:46.113 NVMe-MI: Not Supported 00:09:46.113 Virtualization Management: Not Supported 00:09:46.113 Doorbell Buffer Config: Supported 00:09:46.113 Get LBA Status Capability: Not Supported 00:09:46.113 Command & Feature Lockdown Capability: Not Supported 00:09:46.113 Abort Command Limit: 4 00:09:46.113 Async Event Request Limit: 4 00:09:46.113 Number of Firmware Slots: N/A 00:09:46.113 Firmware Slot 1 Read-Only: N/A 00:09:46.113 Firmware Activation Without Reset: N/A 00:09:46.113 Multiple Update Detection Support: N/A 00:09:46.113 Firmware Update Granularity: No Information Provided 00:09:46.113 Per-Namespace SMART Log: Yes 00:09:46.113 Asymmetric Namespace Access Log Page: Not Supported 00:09:46.113 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:46.113 Command Effects Log Page: Supported 00:09:46.113 Get Log Page Extended Data: Supported 00:09:46.113 Telemetry Log Pages: Not Supported 00:09:46.113 Persistent Event Log Pages: Not Supported 00:09:46.113 Supported Log Pages Log Page: May Support 00:09:46.113 Commands Supported & Effects Log Page: Not Supported 00:09:46.113 Feature Identifiers & Effects Log Page:May Support 00:09:46.113 NVMe-MI Commands & Effects Log Page: May Support 00:09:46.113 Data Area 4 for Telemetry Log: Not Supported 00:09:46.113 Error Log Page Entries Supported: 1 00:09:46.113 Keep Alive: Not Supported 00:09:46.113 00:09:46.113 NVM Command Set Attributes 00:09:46.113 ========================== 00:09:46.113 Submission Queue Entry Size 00:09:46.113 Max: 64 00:09:46.113 Min: 64 00:09:46.113 Completion Queue Entry Size 00:09:46.113 Max: 16 00:09:46.113 Min: 16 00:09:46.113 Number of Namespaces: 256 00:09:46.113 Compare Command: Supported 00:09:46.113 Write Uncorrectable Command: Not Supported 00:09:46.113 Dataset Management Command: Supported 00:09:46.113 Write Zeroes Command: Supported 00:09:46.113 Set Features Save Field: Supported 00:09:46.113 Reservations: Not Supported 00:09:46.114 Timestamp: Supported 00:09:46.114 Copy: Supported 00:09:46.114 Volatile Write Cache: Present 00:09:46.114 Atomic Write Unit (Normal): 1 00:09:46.114 Atomic Write Unit (PFail): 1 00:09:46.114 Atomic Compare & Write Unit: 1 00:09:46.114 Fused Compare & Write: Not Supported 00:09:46.114 Scatter-Gather List 00:09:46.114 SGL Command Set: Supported 00:09:46.114 SGL Keyed: Not Supported 00:09:46.114 SGL Bit Bucket Descriptor: Not Supported 00:09:46.114 SGL Metadata Pointer: Not Supported 00:09:46.114 Oversized SGL: Not Supported 00:09:46.114 SGL Metadata Address: Not Supported 00:09:46.114 SGL Offset: Not Supported 00:09:46.114 Transport SGL Data Block: Not Supported 00:09:46.114 Replay Protected Memory Block: Not Supported 00:09:46.114 00:09:46.114 Firmware Slot Information 00:09:46.114 ========================= 00:09:46.114 Active slot: 1 00:09:46.114 Slot 1 Firmware Revision: 1.0 00:09:46.114 00:09:46.114 00:09:46.114 Commands Supported and Effects 00:09:46.114 ============================== 00:09:46.114 Admin Commands 00:09:46.114 -------------- 00:09:46.114 Delete I/O Submission Queue (00h): Supported 
00:09:46.114 Create I/O Submission Queue (01h): Supported 00:09:46.114 Get Log Page (02h): Supported 00:09:46.114 Delete I/O Completion Queue (04h): Supported 00:09:46.114 Create I/O Completion Queue (05h): Supported 00:09:46.114 Identify (06h): Supported 00:09:46.114 Abort (08h): Supported 00:09:46.114 Set Features (09h): Supported 00:09:46.114 Get Features (0Ah): Supported 00:09:46.114 Asynchronous Event Request (0Ch): Supported 00:09:46.114 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:46.114 Directive Send (19h): Supported 00:09:46.114 Directive Receive (1Ah): Supported 00:09:46.114 Virtualization Management (1Ch): Supported 00:09:46.114 Doorbell Buffer Config (7Ch): Supported 00:09:46.114 Format NVM (80h): Supported LBA-Change 00:09:46.114 I/O Commands 00:09:46.114 ------------ 00:09:46.114 Flush (00h): Supported LBA-Change 00:09:46.114 Write (01h): Supported LBA-Change 00:09:46.114 Read (02h): Supported 00:09:46.114 Compare (05h): Supported 00:09:46.114 Write Zeroes (08h): Supported LBA-Change 00:09:46.114 Dataset Management (09h): Supported LBA-Change 00:09:46.114 Unknown (0Ch): Supported 00:09:46.114 Unknown (12h): Supported 00:09:46.114 Copy (19h): Supported LBA-Change 00:09:46.114 Unknown (1Dh): Supported LBA-Change 00:09:46.114 00:09:46.114 Error Log 00:09:46.114 ========= 00:09:46.114 00:09:46.114 Arbitration 00:09:46.114 =========== 00:09:46.114 Arbitration Burst: no limit 00:09:46.114 00:09:46.114 Power Management 00:09:46.114 ================ 00:09:46.114 Number of Power States: 1 00:09:46.114 Current Power State: Power State #0 00:09:46.114 Power State #0: 00:09:46.114 Max Power: 25.00 W 00:09:46.114 Non-Operational State: Operational 00:09:46.114 Entry Latency: 16 microseconds 00:09:46.114 Exit Latency: 4 microseconds 00:09:46.114 Relative Read Throughput: 0 00:09:46.114 Relative Read Latency: 0 00:09:46.114 Relative Write Throughput: 0 00:09:46.114 Relative Write Latency: 0 00:09:46.114 Idle Power[2024-11-27 11:53:35.904273] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64023 terminated unexpected 00:09:46.114 : Not Reported 00:09:46.114 Active Power: Not Reported 00:09:46.114 Non-Operational Permissive Mode: Not Supported 00:09:46.114 00:09:46.114 Health Information 00:09:46.114 ================== 00:09:46.114 Critical Warnings: 00:09:46.114 Available Spare Space: OK 00:09:46.114 Temperature: OK 00:09:46.114 Device Reliability: OK 00:09:46.114 Read Only: No 00:09:46.114 Volatile Memory Backup: OK 00:09:46.114 Current Temperature: 323 Kelvin (50 Celsius) 00:09:46.114 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:46.114 Available Spare: 0% 00:09:46.114 Available Spare Threshold: 0% 00:09:46.114 Life Percentage Used: 0% 00:09:46.114 Data Units Read: 760 00:09:46.114 Data Units Written: 688 00:09:46.114 Host Read Commands: 33977 00:09:46.114 Host Write Commands: 33763 00:09:46.114 Controller Busy Time: 0 minutes 00:09:46.114 Power Cycles: 0 00:09:46.114 Power On Hours: 0 hours 00:09:46.114 Unsafe Shutdowns: 0 00:09:46.114 Unrecoverable Media Errors: 0 00:09:46.114 Lifetime Error Log Entries: 0 00:09:46.114 Warning Temperature Time: 0 minutes 00:09:46.114 Critical Temperature Time: 0 minutes 00:09:46.114 00:09:46.114 Number of Queues 00:09:46.114 ================ 00:09:46.114 Number of I/O Submission Queues: 64 00:09:46.114 Number of I/O Completion Queues: 64 00:09:46.114 00:09:46.114 ZNS Specific Controller Data 00:09:46.114 ============================ 00:09:46.114 Zone Append Size Limit: 0 00:09:46.114 
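Stepping back to the lcov probe near the top of this test: the 'lt 1.15 2' call traced there boils down to splitting both version strings on '.', '-' and ':' and comparing the pieces numerically, left to right. A sketch of that logic under the same name, assuming purely numeric components (the real helper, cmp_versions in scripts/common.sh, is more general):

#!/usr/bin/env bash
# lt A B: return 0 when version A sorts strictly below version B.
# Simplified sketch of the comparison walked through in the trace above.
lt() {
  local IFS=.-:                # split on the separators the trace shows
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing components count as 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1                     # equal versions are not less-than
}
lt 1.15 2 && echo "lcov 1.15 predates 2: keep the old-style coverage flags"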
00:09:46.114 00:09:46.114 Active Namespaces 00:09:46.114 ================= 00:09:46.114 Namespace ID:1 00:09:46.114 Error Recovery Timeout: Unlimited 00:09:46.114 Command Set Identifier: NVM (00h) 00:09:46.114 Deallocate: Supported 00:09:46.114 Deallocated/Unwritten Error: Supported 00:09:46.114 Deallocated Read Value: All 0x00 00:09:46.114 Deallocate in Write Zeroes: Not Supported 00:09:46.114 Deallocated Guard Field: 0xFFFF 00:09:46.114 Flush: Supported 00:09:46.114 Reservation: Not Supported 00:09:46.114 Metadata Transferred as: Separate Metadata Buffer 00:09:46.114 Namespace Sharing Capabilities: Private 00:09:46.114 Size (in LBAs): 1548666 (5GiB) 00:09:46.114 Capacity (in LBAs): 1548666 (5GiB) 00:09:46.114 Utilization (in LBAs): 1548666 (5GiB) 00:09:46.114 Thin Provisioning: Not Supported 00:09:46.114 Per-NS Atomic Units: No 00:09:46.114 Maximum Single Source Range Length: 128 00:09:46.114 Maximum Copy Length: 128 00:09:46.114 Maximum Source Range Count: 128 00:09:46.114 NGUID/EUI64 Never Reused: No 00:09:46.114 Namespace Write Protected: No 00:09:46.114 Number of LBA Formats: 8 00:09:46.114 Current LBA Format: LBA Format #07 00:09:46.114 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:46.114 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:46.114 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:46.114 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:46.114 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:46.114 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:46.114 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:46.114 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:46.114 00:09:46.114 NVM Specific Namespace Data 00:09:46.115 =========================== 00:09:46.115 Logical Block Storage Tag Mask: 0 00:09:46.115 Protection Information Capabilities: 00:09:46.115 16b Guard Protection Information Storage Tag Support: No 00:09:46.115 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:46.115 Storage Tag Check Read Support: No 00:09:46.115 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.115 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.115 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.115 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.115 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.115 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.115 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.115 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.115 ===================================================== 00:09:46.115 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:46.115 ===================================================== 00:09:46.115 Controller Capabilities/Features 00:09:46.115 ================================ 00:09:46.115 Vendor ID: 1b36 00:09:46.115 Subsystem Vendor ID: 1af4 00:09:46.115 Serial Number: 12341 00:09:46.115 Model Number: QEMU NVMe Ctrl 00:09:46.115 Firmware Version: 8.0.0 00:09:46.115 Recommended Arb Burst: 6 00:09:46.115 IEEE OUI Identifier: 00 54 52 00:09:46.115 Multi-path I/O 00:09:46.115 May have multiple subsystem ports: No 00:09:46.115 May have multiple controllers: No 
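Identify dumps like the ones above are plain text, so quick assertions against them are usually grep/awk one-liners. A sketch, assuming a hypothetical identify.txt captured by redirecting spdk_nvme_identify (a raw capture carries none of the timestamp prefixes this log adds):

#!/usr/bin/env bash
# Pull the active LBA format and its data size out of a captured identify dump.
# identify.txt is a hypothetical capture, e.g.:
#   /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 > identify.txt
set -euo pipefail
cur=$(awk '/^Current LBA Format:/ { sub(/.*#/, ""); print; exit }' identify.txt)
size=$(grep -m1 "^LBA Format #${cur}:" identify.txt | awk '{ print $6 }')
echo "first namespace uses LBA format #${cur} (${size}-byte data blocks)"

Run against the dumps above, this would report format #07 (4096-byte data, 64-byte metadata) for controller 12340 and format #04 (plain 4096-byte blocks) for 12341.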
00:09:46.115 Associated with SR-IOV VF: No 00:09:46.115 Max Data Transfer Size: 524288 00:09:46.115 Max Number of Namespaces: 256 00:09:46.115 Max Number of I/O Queues: 64 00:09:46.115 NVMe Specification Version (VS): 1.4 00:09:46.115 NVMe Specification Version (Identify): 1.4 00:09:46.115 Maximum Queue Entries: 2048 00:09:46.115 Contiguous Queues Required: Yes 00:09:46.115 Arbitration Mechanisms Supported 00:09:46.115 Weighted Round Robin: Not Supported 00:09:46.115 Vendor Specific: Not Supported 00:09:46.115 Reset Timeout: 7500 ms 00:09:46.115 Doorbell Stride: 4 bytes 00:09:46.115 NVM Subsystem Reset: Not Supported 00:09:46.115 Command Sets Supported 00:09:46.115 NVM Command Set: Supported 00:09:46.115 Boot Partition: Not Supported 00:09:46.115 Memory Page Size Minimum: 4096 bytes 00:09:46.115 Memory Page Size Maximum: 65536 bytes 00:09:46.115 Persistent Memory Region: Not Supported 00:09:46.115 Optional Asynchronous Events Supported 00:09:46.115 Namespace Attribute Notices: Supported 00:09:46.115 Firmware Activation Notices: Not Supported 00:09:46.115 ANA Change Notices: Not Supported 00:09:46.115 PLE Aggregate Log Change Notices: Not Supported 00:09:46.115 LBA Status Info Alert Notices: Not Supported 00:09:46.115 EGE Aggregate Log Change Notices: Not Supported 00:09:46.115 Normal NVM Subsystem Shutdown event: Not Supported 00:09:46.115 Zone Descriptor Change Notices: Not Supported 00:09:46.115 Discovery Log Change Notices: Not Supported 00:09:46.115 Controller Attributes 00:09:46.115 128-bit Host Identifier: Not Supported 00:09:46.115 Non-Operational Permissive Mode: Not Supported 00:09:46.115 NVM Sets: Not Supported 00:09:46.115 Read Recovery Levels: Not Supported 00:09:46.115 Endurance Groups: Not Supported 00:09:46.115 Predictable Latency Mode: Not Supported 00:09:46.115 Traffic Based Keep ALive: Not Supported 00:09:46.115 Namespace Granularity: Not Supported 00:09:46.115 SQ Associations: Not Supported 00:09:46.115 UUID List: Not Supported 00:09:46.115 Multi-Domain Subsystem: Not Supported 00:09:46.115 Fixed Capacity Management: Not Supported 00:09:46.115 Variable Capacity Management: Not Supported 00:09:46.115 Delete Endurance Group: Not Supported 00:09:46.115 Delete NVM Set: Not Supported 00:09:46.115 Extended LBA Formats Supported: Supported 00:09:46.115 Flexible Data Placement Supported: Not Supported 00:09:46.115 00:09:46.115 Controller Memory Buffer Support 00:09:46.115 ================================ 00:09:46.115 Supported: No 00:09:46.115 00:09:46.115 Persistent Memory Region Support 00:09:46.115 ================================ 00:09:46.115 Supported: No 00:09:46.115 00:09:46.115 Admin Command Set Attributes 00:09:46.115 ============================ 00:09:46.115 Security Send/Receive: Not Supported 00:09:46.115 Format NVM: Supported 00:09:46.115 Firmware Activate/Download: Not Supported 00:09:46.115 Namespace Management: Supported 00:09:46.115 Device Self-Test: Not Supported 00:09:46.115 Directives: Supported 00:09:46.115 NVMe-MI: Not Supported 00:09:46.115 Virtualization Management: Not Supported 00:09:46.115 Doorbell Buffer Config: Supported 00:09:46.115 Get LBA Status Capability: Not Supported 00:09:46.115 Command & Feature Lockdown Capability: Not Supported 00:09:46.115 Abort Command Limit: 4 00:09:46.115 Async Event Request Limit: 4 00:09:46.115 Number of Firmware Slots: N/A 00:09:46.115 Firmware Slot 1 Read-Only: N/A 00:09:46.115 Firmware Activation Without Reset: N/A 00:09:46.115 Multiple Update Detection Support: N/A 00:09:46.115 Firmware Update Granularity: No 
Information Provided 00:09:46.115 Per-Namespace SMART Log: Yes 00:09:46.115 Asymmetric Namespace Access Log Page: Not Supported 00:09:46.115 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:46.115 Command Effects Log Page: Supported 00:09:46.115 Get Log Page Extended Data: Supported 00:09:46.115 Telemetry Log Pages: Not Supported 00:09:46.115 Persistent Event Log Pages: Not Supported 00:09:46.115 Supported Log Pages Log Page: May Support 00:09:46.115 Commands Supported & Effects Log Page: Not Supported 00:09:46.115 Feature Identifiers & Effects Log Page:May Support 00:09:46.115 NVMe-MI Commands & Effects Log Page: May Support 00:09:46.115 Data Area 4 for Telemetry Log: Not Supported 00:09:46.115 Error Log Page Entries Supported: 1 00:09:46.115 Keep Alive: Not Supported 00:09:46.115 00:09:46.115 NVM Command Set Attributes 00:09:46.115 ========================== 00:09:46.115 Submission Queue Entry Size 00:09:46.115 Max: 64 00:09:46.115 Min: 64 00:09:46.115 Completion Queue Entry Size 00:09:46.115 Max: 16 00:09:46.115 Min: 16 00:09:46.115 Number of Namespaces: 256 00:09:46.115 Compare Command: Supported 00:09:46.115 Write Uncorrectable Command: Not Supported 00:09:46.116 Dataset Management Command: Supported 00:09:46.116 Write Zeroes Command: Supported 00:09:46.116 Set Features Save Field: Supported 00:09:46.116 Reservations: Not Supported 00:09:46.116 Timestamp: Supported 00:09:46.116 Copy: Supported 00:09:46.116 Volatile Write Cache: Present 00:09:46.116 Atomic Write Unit (Normal): 1 00:09:46.116 Atomic Write Unit (PFail): 1 00:09:46.116 Atomic Compare & Write Unit: 1 00:09:46.116 Fused Compare & Write: Not Supported 00:09:46.116 Scatter-Gather List 00:09:46.116 SGL Command Set: Supported 00:09:46.116 SGL Keyed: Not Supported 00:09:46.116 SGL Bit Bucket Descriptor: Not Supported 00:09:46.116 SGL Metadata Pointer: Not Supported 00:09:46.116 Oversized SGL: Not Supported 00:09:46.116 SGL Metadata Address: Not Supported 00:09:46.116 SGL Offset: Not Supported 00:09:46.116 Transport SGL Data Block: Not Supported 00:09:46.116 Replay Protected Memory Block: Not Supported 00:09:46.116 00:09:46.116 Firmware Slot Information 00:09:46.116 ========================= 00:09:46.116 Active slot: 1 00:09:46.116 Slot 1 Firmware Revision: 1.0 00:09:46.116 00:09:46.116 00:09:46.116 Commands Supported and Effects 00:09:46.116 ============================== 00:09:46.116 Admin Commands 00:09:46.116 -------------- 00:09:46.116 Delete I/O Submission Queue (00h): Supported 00:09:46.116 Create I/O Submission Queue (01h): Supported 00:09:46.116 Get Log Page (02h): Supported 00:09:46.116 Delete I/O Completion Queue (04h): Supported 00:09:46.116 Create I/O Completion Queue (05h): Supported 00:09:46.116 Identify (06h): Supported 00:09:46.116 Abort (08h): Supported 00:09:46.116 Set Features (09h): Supported 00:09:46.116 Get Features (0Ah): Supported 00:09:46.116 Asynchronous Event Request (0Ch): Supported 00:09:46.116 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:46.116 Directive Send (19h): Supported 00:09:46.116 Directive Receive (1Ah): Supported 00:09:46.116 Virtualization Management (1Ch): Supported 00:09:46.116 Doorbell Buffer Config (7Ch): Supported 00:09:46.116 Format NVM (80h): Supported LBA-Change 00:09:46.116 I/O Commands 00:09:46.116 ------------ 00:09:46.116 Flush (00h): Supported LBA-Change 00:09:46.116 Write (01h): Supported LBA-Change 00:09:46.116 Read (02h): Supported 00:09:46.116 Compare (05h): Supported 00:09:46.116 Write Zeroes (08h): Supported LBA-Change 00:09:46.116 Dataset Management 
(09h): Supported LBA-Change 00:09:46.116 Unknown (0Ch): Supported 00:09:46.116 Unknown (12h): Supported 00:09:46.116 Copy (19h): Supported LBA-Change 00:09:46.116 Unknown (1Dh): Supported LBA-Change 00:09:46.116 00:09:46.116 Error Log 00:09:46.116 ========= 00:09:46.116 00:09:46.116 Arbitration 00:09:46.116 =========== 00:09:46.116 Arbitration Burst: no limit 00:09:46.116 00:09:46.116 Power Management 00:09:46.116 ================ 00:09:46.116 Number of Power States: 1 00:09:46.116 Current Power State: Power State #0 00:09:46.116 Power State #0: 00:09:46.116 Max Power: 25.00 W 00:09:46.116 Non-Operational State: Operational 00:09:46.116 Entry Latency: 16 microseconds 00:09:46.116 Exit Latency: 4 microseconds 00:09:46.116 Relative Read Throughput: 0 00:09:46.116 Relative Read Latency: 0 00:09:46.116 Relative Write Throughput: 0 00:09:46.116 Relative Write Latency: 0 00:09:46.116 Idle Power: Not Reported 00:09:46.116 Active Power: Not Reported 00:09:46.116 Non-Operational Permissive Mode: Not Supported 00:09:46.116 00:09:46.116 Health Information 00:09:46.116 ================== 00:09:46.116 Critical Warnings: 00:09:46.116 Available Spare Space: OK 00:09:46.116 Temperature: [2024-11-27 11:53:35.905254] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64023 terminated unexpected 00:09:46.116 OK 00:09:46.116 Device Reliability: OK 00:09:46.116 Read Only: No 00:09:46.116 Volatile Memory Backup: OK 00:09:46.116 Current Temperature: 323 Kelvin (50 Celsius) 00:09:46.116 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:46.116 Available Spare: 0% 00:09:46.116 Available Spare Threshold: 0% 00:09:46.116 Life Percentage Used: 0% 00:09:46.116 Data Units Read: 1157 00:09:46.116 Data Units Written: 1022 00:09:46.116 Host Read Commands: 48962 00:09:46.116 Host Write Commands: 47711 00:09:46.116 Controller Busy Time: 0 minutes 00:09:46.116 Power Cycles: 0 00:09:46.116 Power On Hours: 0 hours 00:09:46.116 Unsafe Shutdowns: 0 00:09:46.116 Unrecoverable Media Errors: 0 00:09:46.116 Lifetime Error Log Entries: 0 00:09:46.116 Warning Temperature Time: 0 minutes 00:09:46.116 Critical Temperature Time: 0 minutes 00:09:46.116 00:09:46.116 Number of Queues 00:09:46.116 ================ 00:09:46.116 Number of I/O Submission Queues: 64 00:09:46.116 Number of I/O Completion Queues: 64 00:09:46.116 00:09:46.116 ZNS Specific Controller Data 00:09:46.116 ============================ 00:09:46.116 Zone Append Size Limit: 0 00:09:46.116 00:09:46.116 00:09:46.116 Active Namespaces 00:09:46.116 ================= 00:09:46.116 Namespace ID:1 00:09:46.116 Error Recovery Timeout: Unlimited 00:09:46.116 Command Set Identifier: NVM (00h) 00:09:46.116 Deallocate: Supported 00:09:46.116 Deallocated/Unwritten Error: Supported 00:09:46.116 Deallocated Read Value: All 0x00 00:09:46.116 Deallocate in Write Zeroes: Not Supported 00:09:46.116 Deallocated Guard Field: 0xFFFF 00:09:46.116 Flush: Supported 00:09:46.116 Reservation: Not Supported 00:09:46.116 Namespace Sharing Capabilities: Private 00:09:46.116 Size (in LBAs): 1310720 (5GiB) 00:09:46.116 Capacity (in LBAs): 1310720 (5GiB) 00:09:46.116 Utilization (in LBAs): 1310720 (5GiB) 00:09:46.116 Thin Provisioning: Not Supported 00:09:46.116 Per-NS Atomic Units: No 00:09:46.116 Maximum Single Source Range Length: 128 00:09:46.116 Maximum Copy Length: 128 00:09:46.116 Maximum Source Range Count: 128 00:09:46.116 NGUID/EUI64 Never Reused: No 00:09:46.116 Namespace Write Protected: No 00:09:46.116 Number of LBA Formats: 8 00:09:46.116 Current LBA 
Format: LBA Format #04 00:09:46.116 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:46.116 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:46.116 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:46.116 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:46.116 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:46.116 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:46.116 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:46.116 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:46.116 00:09:46.116 NVM Specific Namespace Data 00:09:46.116 =========================== 00:09:46.116 Logical Block Storage Tag Mask: 0 00:09:46.116 Protection Information Capabilities: 00:09:46.117 16b Guard Protection Information Storage Tag Support: No 00:09:46.117 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:46.117 Storage Tag Check Read Support: No 00:09:46.117 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.117 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.117 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.117 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.117 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.117 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.117 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.117 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.117 ===================================================== 00:09:46.117 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:46.117 ===================================================== 00:09:46.117 Controller Capabilities/Features 00:09:46.117 ================================ 00:09:46.117 Vendor ID: 1b36 00:09:46.117 Subsystem Vendor ID: 1af4 00:09:46.117 Serial Number: 12343 00:09:46.117 Model Number: QEMU NVMe Ctrl 00:09:46.117 Firmware Version: 8.0.0 00:09:46.117 Recommended Arb Burst: 6 00:09:46.117 IEEE OUI Identifier: 00 54 52 00:09:46.117 Multi-path I/O 00:09:46.117 May have multiple subsystem ports: No 00:09:46.117 May have multiple controllers: Yes 00:09:46.117 Associated with SR-IOV VF: No 00:09:46.117 Max Data Transfer Size: 524288 00:09:46.117 Max Number of Namespaces: 256 00:09:46.117 Max Number of I/O Queues: 64 00:09:46.117 NVMe Specification Version (VS): 1.4 00:09:46.117 NVMe Specification Version (Identify): 1.4 00:09:46.117 Maximum Queue Entries: 2048 00:09:46.117 Contiguous Queues Required: Yes 00:09:46.117 Arbitration Mechanisms Supported 00:09:46.117 Weighted Round Robin: Not Supported 00:09:46.117 Vendor Specific: Not Supported 00:09:46.117 Reset Timeout: 7500 ms 00:09:46.117 Doorbell Stride: 4 bytes 00:09:46.117 NVM Subsystem Reset: Not Supported 00:09:46.117 Command Sets Supported 00:09:46.117 NVM Command Set: Supported 00:09:46.117 Boot Partition: Not Supported 00:09:46.117 Memory Page Size Minimum: 4096 bytes 00:09:46.117 Memory Page Size Maximum: 65536 bytes 00:09:46.117 Persistent Memory Region: Not Supported 00:09:46.117 Optional Asynchronous Events Supported 00:09:46.117 Namespace Attribute Notices: Supported 00:09:46.117 Firmware Activation Notices: Not Supported 00:09:46.117 ANA Change Notices: Not Supported 00:09:46.117 PLE Aggregate 
Log Change Notices: Not Supported 00:09:46.117 LBA Status Info Alert Notices: Not Supported 00:09:46.117 EGE Aggregate Log Change Notices: Not Supported 00:09:46.117 Normal NVM Subsystem Shutdown event: Not Supported 00:09:46.117 Zone Descriptor Change Notices: Not Supported 00:09:46.117 Discovery Log Change Notices: Not Supported 00:09:46.117 Controller Attributes 00:09:46.117 128-bit Host Identifier: Not Supported 00:09:46.117 Non-Operational Permissive Mode: Not Supported 00:09:46.117 NVM Sets: Not Supported 00:09:46.117 Read Recovery Levels: Not Supported 00:09:46.117 Endurance Groups: Supported 00:09:46.117 Predictable Latency Mode: Not Supported 00:09:46.117 Traffic Based Keep ALive: Not Supported 00:09:46.117 Namespace Granularity: Not Supported 00:09:46.117 SQ Associations: Not Supported 00:09:46.117 UUID List: Not Supported 00:09:46.117 Multi-Domain Subsystem: Not Supported 00:09:46.117 Fixed Capacity Management: Not Supported 00:09:46.117 Variable Capacity Management: Not Supported 00:09:46.117 Delete Endurance Group: Not Supported 00:09:46.117 Delete NVM Set: Not Supported 00:09:46.117 Extended LBA Formats Supported: Supported 00:09:46.117 Flexible Data Placement Supported: Supported 00:09:46.117 00:09:46.117 Controller Memory Buffer Support 00:09:46.117 ================================ 00:09:46.117 Supported: No 00:09:46.117 00:09:46.117 Persistent Memory Region Support 00:09:46.117 ================================ 00:09:46.117 Supported: No 00:09:46.117 00:09:46.117 Admin Command Set Attributes 00:09:46.117 ============================ 00:09:46.117 Security Send/Receive: Not Supported 00:09:46.117 Format NVM: Supported 00:09:46.117 Firmware Activate/Download: Not Supported 00:09:46.117 Namespace Management: Supported 00:09:46.117 Device Self-Test: Not Supported 00:09:46.117 Directives: Supported 00:09:46.117 NVMe-MI: Not Supported 00:09:46.117 Virtualization Management: Not Supported 00:09:46.117 Doorbell Buffer Config: Supported 00:09:46.117 Get LBA Status Capability: Not Supported 00:09:46.117 Command & Feature Lockdown Capability: Not Supported 00:09:46.117 Abort Command Limit: 4 00:09:46.117 Async Event Request Limit: 4 00:09:46.117 Number of Firmware Slots: N/A 00:09:46.117 Firmware Slot 1 Read-Only: N/A 00:09:46.117 Firmware Activation Without Reset: N/A 00:09:46.117 Multiple Update Detection Support: N/A 00:09:46.117 Firmware Update Granularity: No Information Provided 00:09:46.117 Per-Namespace SMART Log: Yes 00:09:46.117 Asymmetric Namespace Access Log Page: Not Supported 00:09:46.117 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:46.117 Command Effects Log Page: Supported 00:09:46.117 Get Log Page Extended Data: Supported 00:09:46.117 Telemetry Log Pages: Not Supported 00:09:46.117 Persistent Event Log Pages: Not Supported 00:09:46.117 Supported Log Pages Log Page: May Support 00:09:46.117 Commands Supported & Effects Log Page: Not Supported 00:09:46.117 Feature Identifiers & Effects Log Page:May Support 00:09:46.117 NVMe-MI Commands & Effects Log Page: May Support 00:09:46.117 Data Area 4 for Telemetry Log: Not Supported 00:09:46.117 Error Log Page Entries Supported: 1 00:09:46.117 Keep Alive: Not Supported 00:09:46.117 00:09:46.117 NVM Command Set Attributes 00:09:46.117 ========================== 00:09:46.117 Submission Queue Entry Size 00:09:46.117 Max: 64 00:09:46.117 Min: 64 00:09:46.117 Completion Queue Entry Size 00:09:46.117 Max: 16 00:09:46.117 Min: 16 00:09:46.117 Number of Namespaces: 256 00:09:46.117 Compare Command: Supported 00:09:46.117 Write 
Uncorrectable Command: Not Supported 00:09:46.117 Dataset Management Command: Supported 00:09:46.117 Write Zeroes Command: Supported 00:09:46.117 Set Features Save Field: Supported 00:09:46.117 Reservations: Not Supported 00:09:46.117 Timestamp: Supported 00:09:46.117 Copy: Supported 00:09:46.117 Volatile Write Cache: Present 00:09:46.117 Atomic Write Unit (Normal): 1 00:09:46.117 Atomic Write Unit (PFail): 1 00:09:46.117 Atomic Compare & Write Unit: 1 00:09:46.117 Fused Compare & Write: Not Supported 00:09:46.117 Scatter-Gather List 00:09:46.117 SGL Command Set: Supported 00:09:46.118 SGL Keyed: Not Supported 00:09:46.118 SGL Bit Bucket Descriptor: Not Supported 00:09:46.118 SGL Metadata Pointer: Not Supported 00:09:46.118 Oversized SGL: Not Supported 00:09:46.118 SGL Metadata Address: Not Supported 00:09:46.118 SGL Offset: Not Supported 00:09:46.118 Transport SGL Data Block: Not Supported 00:09:46.118 Replay Protected Memory Block: Not Supported 00:09:46.118 00:09:46.118 Firmware Slot Information 00:09:46.118 ========================= 00:09:46.118 Active slot: 1 00:09:46.118 Slot 1 Firmware Revision: 1.0 00:09:46.118 00:09:46.118 00:09:46.118 Commands Supported and Effects 00:09:46.118 ============================== 00:09:46.118 Admin Commands 00:09:46.118 -------------- 00:09:46.118 Delete I/O Submission Queue (00h): Supported 00:09:46.118 Create I/O Submission Queue (01h): Supported 00:09:46.118 Get Log Page (02h): Supported 00:09:46.118 Delete I/O Completion Queue (04h): Supported 00:09:46.118 Create I/O Completion Queue (05h): Supported 00:09:46.118 Identify (06h): Supported 00:09:46.118 Abort (08h): Supported 00:09:46.118 Set Features (09h): Supported 00:09:46.118 Get Features (0Ah): Supported 00:09:46.118 Asynchronous Event Request (0Ch): Supported 00:09:46.118 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:46.118 Directive Send (19h): Supported 00:09:46.118 Directive Receive (1Ah): Supported 00:09:46.118 Virtualization Management (1Ch): Supported 00:09:46.118 Doorbell Buffer Config (7Ch): Supported 00:09:46.118 Format NVM (80h): Supported LBA-Change 00:09:46.118 I/O Commands 00:09:46.118 ------------ 00:09:46.118 Flush (00h): Supported LBA-Change 00:09:46.118 Write (01h): Supported LBA-Change 00:09:46.118 Read (02h): Supported 00:09:46.118 Compare (05h): Supported 00:09:46.118 Write Zeroes (08h): Supported LBA-Change 00:09:46.118 Dataset Management (09h): Supported LBA-Change 00:09:46.118 Unknown (0Ch): Supported 00:09:46.118 Unknown (12h): Supported 00:09:46.118 Copy (19h): Supported LBA-Change 00:09:46.118 Unknown (1Dh): Supported LBA-Change 00:09:46.118 00:09:46.118 Error Log 00:09:46.118 ========= 00:09:46.118 00:09:46.118 Arbitration 00:09:46.118 =========== 00:09:46.118 Arbitration Burst: no limit 00:09:46.118 00:09:46.118 Power Management 00:09:46.118 ================ 00:09:46.118 Number of Power States: 1 00:09:46.118 Current Power State: Power State #0 00:09:46.118 Power State #0: 00:09:46.118 Max Power: 25.00 W 00:09:46.118 Non-Operational State: Operational 00:09:46.118 Entry Latency: 16 microseconds 00:09:46.118 Exit Latency: 4 microseconds 00:09:46.118 Relative Read Throughput: 0 00:09:46.118 Relative Read Latency: 0 00:09:46.118 Relative Write Throughput: 0 00:09:46.118 Relative Write Latency: 0 00:09:46.118 Idle Power: Not Reported 00:09:46.118 Active Power: Not Reported 00:09:46.118 Non-Operational Permissive Mode: Not Supported 00:09:46.118 00:09:46.118 Health Information 00:09:46.118 ================== 00:09:46.118 Critical Warnings: 00:09:46.118 
Available Spare Space: OK 00:09:46.118 Temperature: OK 00:09:46.118 Device Reliability: OK 00:09:46.118 Read Only: No 00:09:46.118 Volatile Memory Backup: OK 00:09:46.118 Current Temperature: 323 Kelvin (50 Celsius) 00:09:46.118 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:46.118 Available Spare: 0% 00:09:46.118 Available Spare Threshold: 0% 00:09:46.118 Life Percentage Used: 0% 00:09:46.118 Data Units Read: 978 00:09:46.118 Data Units Written: 907 00:09:46.118 Host Read Commands: 36099 00:09:46.118 Host Write Commands: 35522 00:09:46.118 Controller Busy Time: 0 minutes 00:09:46.118 Power Cycles: 0 00:09:46.118 Power On Hours: 0 hours 00:09:46.118 Unsafe Shutdowns: 0 00:09:46.118 Unrecoverable Media Errors: 0 00:09:46.118 Lifetime Error Log Entries: 0 00:09:46.118 Warning Temperature Time: 0 minutes 00:09:46.118 Critical Temperature Time: 0 minutes 00:09:46.118 00:09:46.118 Number of Queues 00:09:46.118 ================ 00:09:46.118 Number of I/O Submission Queues: 64 00:09:46.118 Number of I/O Completion Queues: 64 00:09:46.118 00:09:46.118 ZNS Specific Controller Data 00:09:46.118 ============================ 00:09:46.118 Zone Append Size Limit: 0 00:09:46.118 00:09:46.118 00:09:46.118 Active Namespaces 00:09:46.118 ================= 00:09:46.118 Namespace ID:1 00:09:46.118 Error Recovery Timeout: Unlimited 00:09:46.118 Command Set Identifier: NVM (00h) 00:09:46.118 Deallocate: Supported 00:09:46.118 Deallocated/Unwritten Error: Supported 00:09:46.118 Deallocated Read Value: All 0x00 00:09:46.118 Deallocate in Write Zeroes: Not Supported 00:09:46.118 Deallocated Guard Field: 0xFFFF 00:09:46.118 Flush: Supported 00:09:46.118 Reservation: Not Supported 00:09:46.118 Namespace Sharing Capabilities: Multiple Controllers 00:09:46.118 Size (in LBAs): 262144 (1GiB) 00:09:46.118 Capacity (in LBAs): 262144 (1GiB) 00:09:46.118 Utilization (in LBAs): 262144 (1GiB) 00:09:46.118 Thin Provisioning: Not Supported 00:09:46.118 Per-NS Atomic Units: No 00:09:46.118 Maximum Single Source Range Length: 128 00:09:46.118 Maximum Copy Length: 128 00:09:46.118 Maximum Source Range Count: 128 00:09:46.118 NGUID/EUI64 Never Reused: No 00:09:46.118 Namespace Write Protected: No 00:09:46.118 Endurance group ID: 1 00:09:46.118 Number of LBA Formats: 8 00:09:46.118 Current LBA Format: LBA Format #04 00:09:46.118 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:46.118 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:46.118 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:46.118 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:46.118 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:46.118 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:46.118 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:46.118 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:46.118 00:09:46.118 Get Feature FDP: 00:09:46.118 ================ 00:09:46.118 Enabled: Yes 00:09:46.118 FDP configuration index: 0 00:09:46.118 00:09:46.118 FDP configurations log page 00:09:46.118 =========================== 00:09:46.118 Number of FDP configurations: 1 00:09:46.118 Version: 0 00:09:46.118 Size: 112 00:09:46.118 FDP Configuration Descriptor: 0 00:09:46.118 Descriptor Size: 96 00:09:46.118 Reclaim Group Identifier format: 2 00:09:46.118 FDP Volatile Write Cache: Not Present 00:09:46.118 FDP Configuration: Valid 00:09:46.118 Vendor Specific Size: 0 00:09:46.118 Number of Reclaim Groups: 2 00:09:46.118 Number of Reclaim Unit Handles: 8 00:09:46.118 Max Placement Identifiers: 128 00:09:46.118 Number of
Namespaces Supported: 256 00:09:46.118 Reclaim unit Nominal Size: 6000000 bytes 00:09:46.118 Estimated Reclaim Unit Time Limit: Not Reported 00:09:46.118 RUH Desc #000: RUH Type: Initially Isolated 00:09:46.119 RUH Desc #001: RUH Type: Initially Isolated 00:09:46.119 RUH Desc #002: RUH Type: Initially Isolated 00:09:46.119 RUH Desc #003: RUH Type: Initially Isolated 00:09:46.119 RUH Desc #004: RUH Type: Initially Isolated 00:09:46.119 RUH Desc #005: RUH Type: Initially Isolated 00:09:46.119 RUH Desc #006: RUH Type: Initially Isolated 00:09:46.119 RUH Desc #007: RUH Type: Initially Isolated 00:09:46.119 00:09:46.119 FDP reclaim unit handle usage log page 00:09:46.119 ====================================== 00:09:46.119 Number of Reclaim Unit Handles: 8 00:09:46.119 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:46.119 RUH Usage Desc #001: RUH Attributes: Unused 00:09:46.119 RUH Usage Desc #002: RUH Attributes: Unused 00:09:46.119 RUH Usage Desc #003: RUH Attributes: Unused 00:09:46.119 RUH Usage Desc #004: RUH Attributes: Unused 00:09:46.119 RUH Usage Desc #005: RUH Attributes: Unused 00:09:46.119 RUH Usage Desc #006: RUH Attributes: Unused 00:09:46.119 RUH Usage Desc #007: RUH Attributes: Unused 00:09:46.119 00:09:46.119 FDP statistics log page 00:09:46.119 ======================= 00:09:46.119 Host bytes with metadata written: 574070784 00:09:46.119 Med[2024-11-27 11:53:35.906867] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64023 terminated unexpected 00:09:46.119 ia bytes with metadata written: 574148608 00:09:46.119 Media bytes erased: 0 00:09:46.119 00:09:46.119 FDP events log page 00:09:46.119 =================== 00:09:46.119 Number of FDP events: 0 00:09:46.119 00:09:46.119 NVM Specific Namespace Data 00:09:46.119 =========================== 00:09:46.119 Logical Block Storage Tag Mask: 0 00:09:46.119 Protection Information Capabilities: 00:09:46.119 16b Guard Protection Information Storage Tag Support: No 00:09:46.119 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:46.119 Storage Tag Check Read Support: No 00:09:46.119 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.119 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.119 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.119 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.119 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.119 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.119 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.119 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.119 ===================================================== 00:09:46.119 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:46.119 ===================================================== 00:09:46.119 Controller Capabilities/Features 00:09:46.119 ================================ 00:09:46.119 Vendor ID: 1b36 00:09:46.119 Subsystem Vendor ID: 1af4 00:09:46.119 Serial Number: 12342 00:09:46.119 Model Number: QEMU NVMe Ctrl 00:09:46.119 Firmware Version: 8.0.0 00:09:46.119 Recommended Arb Burst: 6 00:09:46.119 IEEE OUI Identifier: 00 54 52 00:09:46.119 Multi-path I/O
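The FDP pages for controller 12343 above (configurations, reclaim unit handle usage, statistics, events) are greppable the same way. A small sketch that tallies handle usage, again against a hypothetical identify.txt capture of that output:

#!/usr/bin/env bash
# Summarise FDP reclaim unit handle usage from a captured identify dump.
# identify.txt is a hypothetical capture of the spdk_nvme_identify output.
used=$(grep -c 'RUH Usage Desc .*RUH Attributes: Controller Specified' identify.txt || true)
unused=$(grep -c 'RUH Usage Desc .*RUH Attributes: Unused' identify.txt || true)
echo "reclaim unit handles: ${used} controller-specified, ${unused} unused"

On the usage log page above that comes out to 1 controller-specified handle and 7 unused ones.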
00:09:46.119 May have multiple subsystem ports: No 00:09:46.119 May have multiple controllers: No 00:09:46.119 Associated with SR-IOV VF: No 00:09:46.119 Max Data Transfer Size: 524288 00:09:46.119 Max Number of Namespaces: 256 00:09:46.119 Max Number of I/O Queues: 64 00:09:46.119 NVMe Specification Version (VS): 1.4 00:09:46.119 NVMe Specification Version (Identify): 1.4 00:09:46.119 Maximum Queue Entries: 2048 00:09:46.119 Contiguous Queues Required: Yes 00:09:46.119 Arbitration Mechanisms Supported 00:09:46.119 Weighted Round Robin: Not Supported 00:09:46.119 Vendor Specific: Not Supported 00:09:46.119 Reset Timeout: 7500 ms 00:09:46.119 Doorbell Stride: 4 bytes 00:09:46.119 NVM Subsystem Reset: Not Supported 00:09:46.119 Command Sets Supported 00:09:46.119 NVM Command Set: Supported 00:09:46.119 Boot Partition: Not Supported 00:09:46.119 Memory Page Size Minimum: 4096 bytes 00:09:46.119 Memory Page Size Maximum: 65536 bytes 00:09:46.119 Persistent Memory Region: Not Supported 00:09:46.119 Optional Asynchronous Events Supported 00:09:46.119 Namespace Attribute Notices: Supported 00:09:46.119 Firmware Activation Notices: Not Supported 00:09:46.119 ANA Change Notices: Not Supported 00:09:46.119 PLE Aggregate Log Change Notices: Not Supported 00:09:46.119 LBA Status Info Alert Notices: Not Supported 00:09:46.119 EGE Aggregate Log Change Notices: Not Supported 00:09:46.119 Normal NVM Subsystem Shutdown event: Not Supported 00:09:46.119 Zone Descriptor Change Notices: Not Supported 00:09:46.119 Discovery Log Change Notices: Not Supported 00:09:46.119 Controller Attributes 00:09:46.119 128-bit Host Identifier: Not Supported 00:09:46.119 Non-Operational Permissive Mode: Not Supported 00:09:46.119 NVM Sets: Not Supported 00:09:46.119 Read Recovery Levels: Not Supported 00:09:46.119 Endurance Groups: Not Supported 00:09:46.119 Predictable Latency Mode: Not Supported 00:09:46.119 Traffic Based Keep ALive: Not Supported 00:09:46.119 Namespace Granularity: Not Supported 00:09:46.119 SQ Associations: Not Supported 00:09:46.119 UUID List: Not Supported 00:09:46.119 Multi-Domain Subsystem: Not Supported 00:09:46.119 Fixed Capacity Management: Not Supported 00:09:46.119 Variable Capacity Management: Not Supported 00:09:46.120 Delete Endurance Group: Not Supported 00:09:46.120 Delete NVM Set: Not Supported 00:09:46.120 Extended LBA Formats Supported: Supported 00:09:46.120 Flexible Data Placement Supported: Not Supported 00:09:46.120 00:09:46.120 Controller Memory Buffer Support 00:09:46.120 ================================ 00:09:46.120 Supported: No 00:09:46.120 00:09:46.120 Persistent Memory Region Support 00:09:46.120 ================================ 00:09:46.120 Supported: No 00:09:46.120 00:09:46.120 Admin Command Set Attributes 00:09:46.120 ============================ 00:09:46.120 Security Send/Receive: Not Supported 00:09:46.120 Format NVM: Supported 00:09:46.120 Firmware Activate/Download: Not Supported 00:09:46.120 Namespace Management: Supported 00:09:46.120 Device Self-Test: Not Supported 00:09:46.120 Directives: Supported 00:09:46.120 NVMe-MI: Not Supported 00:09:46.120 Virtualization Management: Not Supported 00:09:46.120 Doorbell Buffer Config: Supported 00:09:46.120 Get LBA Status Capability: Not Supported 00:09:46.120 Command & Feature Lockdown Capability: Not Supported 00:09:46.120 Abort Command Limit: 4 00:09:46.120 Async Event Request Limit: 4 00:09:46.120 Number of Firmware Slots: N/A 00:09:46.120 Firmware Slot 1 Read-Only: N/A 00:09:46.120 Firmware Activation Without Reset: N/A 
00:09:46.120 Multiple Update Detection Support: N/A 00:09:46.120 Firmware Update Granularity: No Information Provided 00:09:46.120 Per-Namespace SMART Log: Yes 00:09:46.120 Asymmetric Namespace Access Log Page: Not Supported 00:09:46.120 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:46.120 Command Effects Log Page: Supported 00:09:46.120 Get Log Page Extended Data: Supported 00:09:46.120 Telemetry Log Pages: Not Supported 00:09:46.120 Persistent Event Log Pages: Not Supported 00:09:46.120 Supported Log Pages Log Page: May Support 00:09:46.120 Commands Supported & Effects Log Page: Not Supported 00:09:46.120 Feature Identifiers & Effects Log Page:May Support 00:09:46.120 NVMe-MI Commands & Effects Log Page: May Support 00:09:46.120 Data Area 4 for Telemetry Log: Not Supported 00:09:46.120 Error Log Page Entries Supported: 1 00:09:46.120 Keep Alive: Not Supported 00:09:46.120 00:09:46.120 NVM Command Set Attributes 00:09:46.120 ========================== 00:09:46.120 Submission Queue Entry Size 00:09:46.120 Max: 64 00:09:46.120 Min: 64 00:09:46.120 Completion Queue Entry Size 00:09:46.120 Max: 16 00:09:46.120 Min: 16 00:09:46.120 Number of Namespaces: 256 00:09:46.120 Compare Command: Supported 00:09:46.120 Write Uncorrectable Command: Not Supported 00:09:46.120 Dataset Management Command: Supported 00:09:46.120 Write Zeroes Command: Supported 00:09:46.120 Set Features Save Field: Supported 00:09:46.120 Reservations: Not Supported 00:09:46.120 Timestamp: Supported 00:09:46.120 Copy: Supported 00:09:46.120 Volatile Write Cache: Present 00:09:46.120 Atomic Write Unit (Normal): 1 00:09:46.120 Atomic Write Unit (PFail): 1 00:09:46.120 Atomic Compare & Write Unit: 1 00:09:46.120 Fused Compare & Write: Not Supported 00:09:46.120 Scatter-Gather List 00:09:46.120 SGL Command Set: Supported 00:09:46.120 SGL Keyed: Not Supported 00:09:46.120 SGL Bit Bucket Descriptor: Not Supported 00:09:46.120 SGL Metadata Pointer: Not Supported 00:09:46.120 Oversized SGL: Not Supported 00:09:46.120 SGL Metadata Address: Not Supported 00:09:46.120 SGL Offset: Not Supported 00:09:46.120 Transport SGL Data Block: Not Supported 00:09:46.120 Replay Protected Memory Block: Not Supported 00:09:46.120 00:09:46.120 Firmware Slot Information 00:09:46.120 ========================= 00:09:46.120 Active slot: 1 00:09:46.120 Slot 1 Firmware Revision: 1.0 00:09:46.120 00:09:46.120 00:09:46.120 Commands Supported and Effects 00:09:46.120 ============================== 00:09:46.120 Admin Commands 00:09:46.120 -------------- 00:09:46.120 Delete I/O Submission Queue (00h): Supported 00:09:46.120 Create I/O Submission Queue (01h): Supported 00:09:46.120 Get Log Page (02h): Supported 00:09:46.120 Delete I/O Completion Queue (04h): Supported 00:09:46.120 Create I/O Completion Queue (05h): Supported 00:09:46.120 Identify (06h): Supported 00:09:46.120 Abort (08h): Supported 00:09:46.120 Set Features (09h): Supported 00:09:46.120 Get Features (0Ah): Supported 00:09:46.120 Asynchronous Event Request (0Ch): Supported 00:09:46.120 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:46.120 Directive Send (19h): Supported 00:09:46.120 Directive Receive (1Ah): Supported 00:09:46.120 Virtualization Management (1Ch): Supported 00:09:46.120 Doorbell Buffer Config (7Ch): Supported 00:09:46.120 Format NVM (80h): Supported LBA-Change 00:09:46.120 I/O Commands 00:09:46.120 ------------ 00:09:46.120 Flush (00h): Supported LBA-Change 00:09:46.120 Write (01h): Supported LBA-Change 00:09:46.120 Read (02h): Supported 00:09:46.120 Compare (05h): 
Supported 00:09:46.120 Write Zeroes (08h): Supported LBA-Change 00:09:46.120 Dataset Management (09h): Supported LBA-Change 00:09:46.120 Unknown (0Ch): Supported 00:09:46.120 Unknown (12h): Supported 00:09:46.120 Copy (19h): Supported LBA-Change 00:09:46.120 Unknown (1Dh): Supported LBA-Change 00:09:46.120 00:09:46.120 Error Log 00:09:46.120 ========= 00:09:46.120 00:09:46.120 Arbitration 00:09:46.120 =========== 00:09:46.120 Arbitration Burst: no limit 00:09:46.120 00:09:46.120 Power Management 00:09:46.120 ================ 00:09:46.120 Number of Power States: 1 00:09:46.120 Current Power State: Power State #0 00:09:46.120 Power State #0: 00:09:46.120 Max Power: 25.00 W 00:09:46.120 Non-Operational State: Operational 00:09:46.120 Entry Latency: 16 microseconds 00:09:46.120 Exit Latency: 4 microseconds 00:09:46.120 Relative Read Throughput: 0 00:09:46.120 Relative Read Latency: 0 00:09:46.120 Relative Write Throughput: 0 00:09:46.120 Relative Write Latency: 0 00:09:46.120 Idle Power: Not Reported 00:09:46.120 Active Power: Not Reported 00:09:46.120 Non-Operational Permissive Mode: Not Supported 00:09:46.120 00:09:46.120 Health Information 00:09:46.120 ================== 00:09:46.120 Critical Warnings: 00:09:46.120 Available Spare Space: OK 00:09:46.120 Temperature: OK 00:09:46.120 Device Reliability: OK 00:09:46.120 Read Only: No 00:09:46.120 Volatile Memory Backup: OK 00:09:46.120 Current Temperature: 323 Kelvin (50 Celsius) 00:09:46.120 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:46.120 Available Spare: 0% 00:09:46.120 Available Spare Threshold: 0% 00:09:46.120 Life Percentage Used: 0% 00:09:46.121 Data Units Read: 2511 00:09:46.121 Data Units Written: 2299 00:09:46.121 Host Read Commands: 104580 00:09:46.121 Host Write Commands: 102849 00:09:46.121 Controller Busy Time: 0 minutes 00:09:46.121 Power Cycles: 0 00:09:46.121 Power On Hours: 0 hours 00:09:46.121 Unsafe Shutdowns: 0 00:09:46.121 Unrecoverable Media Errors: 0 00:09:46.121 Lifetime Error Log Entries: 0 00:09:46.121 Warning Temperature Time: 0 minutes 00:09:46.121 Critical Temperature Time: 0 minutes 00:09:46.121 00:09:46.121 Number of Queues 00:09:46.121 ================ 00:09:46.121 Number of I/O Submission Queues: 64 00:09:46.121 Number of I/O Completion Queues: 64 00:09:46.121 00:09:46.121 ZNS Specific Controller Data 00:09:46.121 ============================ 00:09:46.121 Zone Append Size Limit: 0 00:09:46.121 00:09:46.121 00:09:46.121 Active Namespaces 00:09:46.121 ================= 00:09:46.121 Namespace ID:1 00:09:46.121 Error Recovery Timeout: Unlimited 00:09:46.121 Command Set Identifier: NVM (00h) 00:09:46.121 Deallocate: Supported 00:09:46.121 Deallocated/Unwritten Error: Supported 00:09:46.121 Deallocated Read Value: All 0x00 00:09:46.121 Deallocate in Write Zeroes: Not Supported 00:09:46.121 Deallocated Guard Field: 0xFFFF 00:09:46.121 Flush: Supported 00:09:46.121 Reservation: Not Supported 00:09:46.121 Namespace Sharing Capabilities: Private 00:09:46.121 Size (in LBAs): 1048576 (4GiB) 00:09:46.121 Capacity (in LBAs): 1048576 (4GiB) 00:09:46.121 Utilization (in LBAs): 1048576 (4GiB) 00:09:46.121 Thin Provisioning: Not Supported 00:09:46.121 Per-NS Atomic Units: No 00:09:46.121 Maximum Single Source Range Length: 128 00:09:46.121 Maximum Copy Length: 128 00:09:46.121 Maximum Source Range Count: 128 00:09:46.121 NGUID/EUI64 Never Reused: No 00:09:46.121 Namespace Write Protected: No 00:09:46.121 Number of LBA Formats: 8 00:09:46.121 Current LBA Format: LBA Format #04 00:09:46.121 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:09:46.121 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:46.121 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:46.121 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:46.121 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:46.121 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:46.121 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:46.121 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:46.121 00:09:46.121 NVM Specific Namespace Data 00:09:46.121 =========================== 00:09:46.121 Logical Block Storage Tag Mask: 0 00:09:46.121 Protection Information Capabilities: 00:09:46.121 16b Guard Protection Information Storage Tag Support: No 00:09:46.121 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:46.121 Storage Tag Check Read Support: No 00:09:46.121 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.121 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.121 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.121 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.121 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.121 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.121 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.121 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.121 Namespace ID:2 00:09:46.121 Error Recovery Timeout: Unlimited 00:09:46.121 Command Set Identifier: NVM (00h) 00:09:46.121 Deallocate: Supported 00:09:46.121 Deallocated/Unwritten Error: Supported 00:09:46.121 Deallocated Read Value: All 0x00 00:09:46.121 Deallocate in Write Zeroes: Not Supported 00:09:46.121 Deallocated Guard Field: 0xFFFF 00:09:46.121 Flush: Supported 00:09:46.121 Reservation: Not Supported 00:09:46.121 Namespace Sharing Capabilities: Private 00:09:46.121 Size (in LBAs): 1048576 (4GiB) 00:09:46.121 Capacity (in LBAs): 1048576 (4GiB) 00:09:46.121 Utilization (in LBAs): 1048576 (4GiB) 00:09:46.121 Thin Provisioning: Not Supported 00:09:46.121 Per-NS Atomic Units: No 00:09:46.121 Maximum Single Source Range Length: 128 00:09:46.121 Maximum Copy Length: 128 00:09:46.121 Maximum Source Range Count: 128 00:09:46.121 NGUID/EUI64 Never Reused: No 00:09:46.121 Namespace Write Protected: No 00:09:46.121 Number of LBA Formats: 8 00:09:46.121 Current LBA Format: LBA Format #04 00:09:46.121 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:46.121 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:46.121 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:46.121 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:46.121 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:46.121 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:46.121 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:46.121 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:46.121 00:09:46.121 NVM Specific Namespace Data 00:09:46.121 =========================== 00:09:46.121 Logical Block Storage Tag Mask: 0 00:09:46.121 Protection Information Capabilities: 00:09:46.121 16b Guard Protection Information Storage Tag Support: No 00:09:46.121 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:09:46.121 Storage Tag Check Read Support: No 00:09:46.121 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.121 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.121 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.121 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.121 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.121 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.121 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.121 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.121 Namespace ID:3 00:09:46.121 Error Recovery Timeout: Unlimited 00:09:46.121 Command Set Identifier: NVM (00h) 00:09:46.121 Deallocate: Supported 00:09:46.121 Deallocated/Unwritten Error: Supported 00:09:46.121 Deallocated Read Value: All 0x00 00:09:46.121 Deallocate in Write Zeroes: Not Supported 00:09:46.121 Deallocated Guard Field: 0xFFFF 00:09:46.121 Flush: Supported 00:09:46.121 Reservation: Not Supported 00:09:46.121 Namespace Sharing Capabilities: Private 00:09:46.121 Size (in LBAs): 1048576 (4GiB) 00:09:46.121 Capacity (in LBAs): 1048576 (4GiB) 00:09:46.121 Utilization (in LBAs): 1048576 (4GiB) 00:09:46.121 Thin Provisioning: Not Supported 00:09:46.121 Per-NS Atomic Units: No 00:09:46.121 Maximum Single Source Range Length: 128 00:09:46.121 Maximum Copy Length: 128 00:09:46.121 Maximum Source Range Count: 128 00:09:46.121 NGUID/EUI64 Never Reused: No 00:09:46.121 Namespace Write Protected: No 00:09:46.121 Number of LBA Formats: 8 00:09:46.121 Current LBA Format: LBA Format #04 00:09:46.121 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:46.121 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:46.121 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:46.121 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:46.122 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:46.122 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:46.122 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:46.122 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:46.122 00:09:46.122 NVM Specific Namespace Data 00:09:46.122 =========================== 00:09:46.122 Logical Block Storage Tag Mask: 0 00:09:46.122 Protection Information Capabilities: 00:09:46.122 16b Guard Protection Information Storage Tag Support: No 00:09:46.122 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:46.122 Storage Tag Check Read Support: No 00:09:46.122 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.122 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.122 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.122 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.122 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.122 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.122 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.122 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.122 11:53:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:46.122 11:53:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:46.381 ===================================================== 00:09:46.381 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:46.381 ===================================================== 00:09:46.381 Controller Capabilities/Features 00:09:46.381 ================================ 00:09:46.381 Vendor ID: 1b36 00:09:46.381 Subsystem Vendor ID: 1af4 00:09:46.381 Serial Number: 12340 00:09:46.381 Model Number: QEMU NVMe Ctrl 00:09:46.381 Firmware Version: 8.0.0 00:09:46.381 Recommended Arb Burst: 6 00:09:46.381 IEEE OUI Identifier: 00 54 52 00:09:46.381 Multi-path I/O 00:09:46.381 May have multiple subsystem ports: No 00:09:46.381 May have multiple controllers: No 00:09:46.381 Associated with SR-IOV VF: No 00:09:46.381 Max Data Transfer Size: 524288 00:09:46.381 Max Number of Namespaces: 256 00:09:46.381 Max Number of I/O Queues: 64 00:09:46.381 NVMe Specification Version (VS): 1.4 00:09:46.381 NVMe Specification Version (Identify): 1.4 00:09:46.381 Maximum Queue Entries: 2048 00:09:46.382 Contiguous Queues Required: Yes 00:09:46.382 Arbitration Mechanisms Supported 00:09:46.382 Weighted Round Robin: Not Supported 00:09:46.382 Vendor Specific: Not Supported 00:09:46.382 Reset Timeout: 7500 ms 00:09:46.382 Doorbell Stride: 4 bytes 00:09:46.382 NVM Subsystem Reset: Not Supported 00:09:46.382 Command Sets Supported 00:09:46.382 NVM Command Set: Supported 00:09:46.382 Boot Partition: Not Supported 00:09:46.382 Memory Page Size Minimum: 4096 bytes 00:09:46.382 Memory Page Size Maximum: 65536 bytes 00:09:46.382 Persistent Memory Region: Not Supported 00:09:46.382 Optional Asynchronous Events Supported 00:09:46.382 Namespace Attribute Notices: Supported 00:09:46.382 Firmware Activation Notices: Not Supported 00:09:46.382 ANA Change Notices: Not Supported 00:09:46.382 PLE Aggregate Log Change Notices: Not Supported 00:09:46.382 LBA Status Info Alert Notices: Not Supported 00:09:46.382 EGE Aggregate Log Change Notices: Not Supported 00:09:46.382 Normal NVM Subsystem Shutdown event: Not Supported 00:09:46.382 Zone Descriptor Change Notices: Not Supported 00:09:46.382 Discovery Log Change Notices: Not Supported 00:09:46.382 Controller Attributes 00:09:46.382 128-bit Host Identifier: Not Supported 00:09:46.382 Non-Operational Permissive Mode: Not Supported 00:09:46.382 NVM Sets: Not Supported 00:09:46.382 Read Recovery Levels: Not Supported 00:09:46.382 Endurance Groups: Not Supported 00:09:46.382 Predictable Latency Mode: Not Supported 00:09:46.382 Traffic Based Keep ALive: Not Supported 00:09:46.382 Namespace Granularity: Not Supported 00:09:46.382 SQ Associations: Not Supported 00:09:46.382 UUID List: Not Supported 00:09:46.382 Multi-Domain Subsystem: Not Supported 00:09:46.382 Fixed Capacity Management: Not Supported 00:09:46.382 Variable Capacity Management: Not Supported 00:09:46.382 Delete Endurance Group: Not Supported 00:09:46.382 Delete NVM Set: Not Supported 00:09:46.382 Extended LBA Formats Supported: Supported 00:09:46.382 Flexible Data Placement Supported: Not Supported 00:09:46.382 00:09:46.382 Controller Memory Buffer Support 00:09:46.382 ================================ 00:09:46.382 Supported: No 00:09:46.382 00:09:46.382 Persistent Memory Region Support 00:09:46.382 
================================ 00:09:46.382 Supported: No 00:09:46.382 00:09:46.382 Admin Command Set Attributes 00:09:46.382 ============================ 00:09:46.382 Security Send/Receive: Not Supported 00:09:46.382 Format NVM: Supported 00:09:46.382 Firmware Activate/Download: Not Supported 00:09:46.382 Namespace Management: Supported 00:09:46.382 Device Self-Test: Not Supported 00:09:46.382 Directives: Supported 00:09:46.382 NVMe-MI: Not Supported 00:09:46.382 Virtualization Management: Not Supported 00:09:46.382 Doorbell Buffer Config: Supported 00:09:46.382 Get LBA Status Capability: Not Supported 00:09:46.382 Command & Feature Lockdown Capability: Not Supported 00:09:46.382 Abort Command Limit: 4 00:09:46.382 Async Event Request Limit: 4 00:09:46.382 Number of Firmware Slots: N/A 00:09:46.382 Firmware Slot 1 Read-Only: N/A 00:09:46.382 Firmware Activation Without Reset: N/A 00:09:46.382 Multiple Update Detection Support: N/A 00:09:46.382 Firmware Update Granularity: No Information Provided 00:09:46.382 Per-Namespace SMART Log: Yes 00:09:46.382 Asymmetric Namespace Access Log Page: Not Supported 00:09:46.382 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:46.382 Command Effects Log Page: Supported 00:09:46.382 Get Log Page Extended Data: Supported 00:09:46.382 Telemetry Log Pages: Not Supported 00:09:46.382 Persistent Event Log Pages: Not Supported 00:09:46.382 Supported Log Pages Log Page: May Support 00:09:46.382 Commands Supported & Effects Log Page: Not Supported 00:09:46.382 Feature Identifiers & Effects Log Page:May Support 00:09:46.382 NVMe-MI Commands & Effects Log Page: May Support 00:09:46.382 Data Area 4 for Telemetry Log: Not Supported 00:09:46.382 Error Log Page Entries Supported: 1 00:09:46.382 Keep Alive: Not Supported 00:09:46.382 00:09:46.382 NVM Command Set Attributes 00:09:46.382 ========================== 00:09:46.382 Submission Queue Entry Size 00:09:46.382 Max: 64 00:09:46.382 Min: 64 00:09:46.382 Completion Queue Entry Size 00:09:46.382 Max: 16 00:09:46.382 Min: 16 00:09:46.382 Number of Namespaces: 256 00:09:46.382 Compare Command: Supported 00:09:46.382 Write Uncorrectable Command: Not Supported 00:09:46.382 Dataset Management Command: Supported 00:09:46.382 Write Zeroes Command: Supported 00:09:46.382 Set Features Save Field: Supported 00:09:46.382 Reservations: Not Supported 00:09:46.382 Timestamp: Supported 00:09:46.382 Copy: Supported 00:09:46.382 Volatile Write Cache: Present 00:09:46.382 Atomic Write Unit (Normal): 1 00:09:46.382 Atomic Write Unit (PFail): 1 00:09:46.382 Atomic Compare & Write Unit: 1 00:09:46.382 Fused Compare & Write: Not Supported 00:09:46.382 Scatter-Gather List 00:09:46.382 SGL Command Set: Supported 00:09:46.382 SGL Keyed: Not Supported 00:09:46.382 SGL Bit Bucket Descriptor: Not Supported 00:09:46.382 SGL Metadata Pointer: Not Supported 00:09:46.382 Oversized SGL: Not Supported 00:09:46.382 SGL Metadata Address: Not Supported 00:09:46.382 SGL Offset: Not Supported 00:09:46.382 Transport SGL Data Block: Not Supported 00:09:46.382 Replay Protected Memory Block: Not Supported 00:09:46.382 00:09:46.382 Firmware Slot Information 00:09:46.382 ========================= 00:09:46.382 Active slot: 1 00:09:46.382 Slot 1 Firmware Revision: 1.0 00:09:46.382 00:09:46.382 00:09:46.382 Commands Supported and Effects 00:09:46.382 ============================== 00:09:46.382 Admin Commands 00:09:46.382 -------------- 00:09:46.382 Delete I/O Submission Queue (00h): Supported 00:09:46.382 Create I/O Submission Queue (01h): Supported 00:09:46.382 
Get Log Page (02h): Supported 00:09:46.382 Delete I/O Completion Queue (04h): Supported 00:09:46.382 Create I/O Completion Queue (05h): Supported 00:09:46.382 Identify (06h): Supported 00:09:46.382 Abort (08h): Supported 00:09:46.382 Set Features (09h): Supported 00:09:46.382 Get Features (0Ah): Supported 00:09:46.382 Asynchronous Event Request (0Ch): Supported 00:09:46.382 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:46.382 Directive Send (19h): Supported 00:09:46.382 Directive Receive (1Ah): Supported 00:09:46.382 Virtualization Management (1Ch): Supported 00:09:46.382 Doorbell Buffer Config (7Ch): Supported 00:09:46.382 Format NVM (80h): Supported LBA-Change 00:09:46.382 I/O Commands 00:09:46.382 ------------ 00:09:46.382 Flush (00h): Supported LBA-Change 00:09:46.383 Write (01h): Supported LBA-Change 00:09:46.383 Read (02h): Supported 00:09:46.383 Compare (05h): Supported 00:09:46.383 Write Zeroes (08h): Supported LBA-Change 00:09:46.383 Dataset Management (09h): Supported LBA-Change 00:09:46.383 Unknown (0Ch): Supported 00:09:46.383 Unknown (12h): Supported 00:09:46.383 Copy (19h): Supported LBA-Change 00:09:46.383 Unknown (1Dh): Supported LBA-Change 00:09:46.383 00:09:46.383 Error Log 00:09:46.383 ========= 00:09:46.383 00:09:46.383 Arbitration 00:09:46.383 =========== 00:09:46.383 Arbitration Burst: no limit 00:09:46.383 00:09:46.383 Power Management 00:09:46.383 ================ 00:09:46.383 Number of Power States: 1 00:09:46.383 Current Power State: Power State #0 00:09:46.383 Power State #0: 00:09:46.383 Max Power: 25.00 W 00:09:46.383 Non-Operational State: Operational 00:09:46.383 Entry Latency: 16 microseconds 00:09:46.383 Exit Latency: 4 microseconds 00:09:46.383 Relative Read Throughput: 0 00:09:46.383 Relative Read Latency: 0 00:09:46.383 Relative Write Throughput: 0 00:09:46.383 Relative Write Latency: 0 00:09:46.383 Idle Power: Not Reported 00:09:46.383 Active Power: Not Reported 00:09:46.383 Non-Operational Permissive Mode: Not Supported 00:09:46.383 00:09:46.383 Health Information 00:09:46.383 ================== 00:09:46.383 Critical Warnings: 00:09:46.383 Available Spare Space: OK 00:09:46.383 Temperature: OK 00:09:46.383 Device Reliability: OK 00:09:46.383 Read Only: No 00:09:46.383 Volatile Memory Backup: OK 00:09:46.383 Current Temperature: 323 Kelvin (50 Celsius) 00:09:46.383 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:46.383 Available Spare: 0% 00:09:46.383 Available Spare Threshold: 0% 00:09:46.383 Life Percentage Used: 0% 00:09:46.383 Data Units Read: 760 00:09:46.383 Data Units Written: 688 00:09:46.383 Host Read Commands: 33977 00:09:46.383 Host Write Commands: 33763 00:09:46.383 Controller Busy Time: 0 minutes 00:09:46.383 Power Cycles: 0 00:09:46.383 Power On Hours: 0 hours 00:09:46.383 Unsafe Shutdowns: 0 00:09:46.383 Unrecoverable Media Errors: 0 00:09:46.383 Lifetime Error Log Entries: 0 00:09:46.383 Warning Temperature Time: 0 minutes 00:09:46.383 Critical Temperature Time: 0 minutes 00:09:46.383 00:09:46.383 Number of Queues 00:09:46.383 ================ 00:09:46.383 Number of I/O Submission Queues: 64 00:09:46.383 Number of I/O Completion Queues: 64 00:09:46.383 00:09:46.383 ZNS Specific Controller Data 00:09:46.383 ============================ 00:09:46.383 Zone Append Size Limit: 0 00:09:46.383 00:09:46.383 00:09:46.383 Active Namespaces 00:09:46.383 ================= 00:09:46.383 Namespace ID:1 00:09:46.383 Error Recovery Timeout: Unlimited 00:09:46.383 Command Set Identifier: NVM (00h) 00:09:46.383 Deallocate: Supported 
00:09:46.383 Deallocated/Unwritten Error: Supported 00:09:46.383 Deallocated Read Value: All 0x00 00:09:46.383 Deallocate in Write Zeroes: Not Supported 00:09:46.383 Deallocated Guard Field: 0xFFFF 00:09:46.383 Flush: Supported 00:09:46.383 Reservation: Not Supported 00:09:46.383 Metadata Transferred as: Separate Metadata Buffer 00:09:46.383 Namespace Sharing Capabilities: Private 00:09:46.383 Size (in LBAs): 1548666 (5GiB) 00:09:46.383 Capacity (in LBAs): 1548666 (5GiB) 00:09:46.383 Utilization (in LBAs): 1548666 (5GiB) 00:09:46.383 Thin Provisioning: Not Supported 00:09:46.383 Per-NS Atomic Units: No 00:09:46.383 Maximum Single Source Range Length: 128 00:09:46.383 Maximum Copy Length: 128 00:09:46.383 Maximum Source Range Count: 128 00:09:46.383 NGUID/EUI64 Never Reused: No 00:09:46.383 Namespace Write Protected: No 00:09:46.383 Number of LBA Formats: 8 00:09:46.383 Current LBA Format: LBA Format #07 00:09:46.383 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:46.383 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:46.383 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:46.383 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:46.383 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:46.383 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:46.383 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:46.383 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:46.383 00:09:46.383 NVM Specific Namespace Data 00:09:46.383 =========================== 00:09:46.383 Logical Block Storage Tag Mask: 0 00:09:46.383 Protection Information Capabilities: 00:09:46.383 16b Guard Protection Information Storage Tag Support: No 00:09:46.383 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:46.383 Storage Tag Check Read Support: No 00:09:46.383 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.383 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.383 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.383 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.383 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.383 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.383 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.383 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.383 11:53:36 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:46.383 11:53:36 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:46.643 ===================================================== 00:09:46.643 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:46.643 ===================================================== 00:09:46.643 Controller Capabilities/Features 00:09:46.643 ================================ 00:09:46.643 Vendor ID: 1b36 00:09:46.643 Subsystem Vendor ID: 1af4 00:09:46.643 Serial Number: 12341 00:09:46.643 Model Number: QEMU NVMe Ctrl 00:09:46.643 Firmware Version: 8.0.0 00:09:46.643 Recommended Arb Burst: 6 00:09:46.643 IEEE OUI Identifier: 00 54 52 00:09:46.643 Multi-path I/O 00:09:46.643 May have multiple subsystem ports: No 00:09:46.643 May have multiple 
controllers: No 00:09:46.643 Associated with SR-IOV VF: No 00:09:46.643 Max Data Transfer Size: 524288 00:09:46.643 Max Number of Namespaces: 256 00:09:46.643 Max Number of I/O Queues: 64 00:09:46.643 NVMe Specification Version (VS): 1.4 00:09:46.643 NVMe Specification Version (Identify): 1.4 00:09:46.643 Maximum Queue Entries: 2048 00:09:46.643 Contiguous Queues Required: Yes 00:09:46.643 Arbitration Mechanisms Supported 00:09:46.643 Weighted Round Robin: Not Supported 00:09:46.643 Vendor Specific: Not Supported 00:09:46.643 Reset Timeout: 7500 ms 00:09:46.643 Doorbell Stride: 4 bytes 00:09:46.643 NVM Subsystem Reset: Not Supported 00:09:46.643 Command Sets Supported 00:09:46.643 NVM Command Set: Supported 00:09:46.643 Boot Partition: Not Supported 00:09:46.643 Memory Page Size Minimum: 4096 bytes 00:09:46.643 Memory Page Size Maximum: 65536 bytes 00:09:46.643 Persistent Memory Region: Not Supported 00:09:46.643 Optional Asynchronous Events Supported 00:09:46.643 Namespace Attribute Notices: Supported 00:09:46.643 Firmware Activation Notices: Not Supported 00:09:46.643 ANA Change Notices: Not Supported 00:09:46.643 PLE Aggregate Log Change Notices: Not Supported 00:09:46.643 LBA Status Info Alert Notices: Not Supported 00:09:46.643 EGE Aggregate Log Change Notices: Not Supported 00:09:46.643 Normal NVM Subsystem Shutdown event: Not Supported 00:09:46.643 Zone Descriptor Change Notices: Not Supported 00:09:46.643 Discovery Log Change Notices: Not Supported 00:09:46.643 Controller Attributes 00:09:46.643 128-bit Host Identifier: Not Supported 00:09:46.643 Non-Operational Permissive Mode: Not Supported 00:09:46.643 NVM Sets: Not Supported 00:09:46.643 Read Recovery Levels: Not Supported 00:09:46.643 Endurance Groups: Not Supported 00:09:46.643 Predictable Latency Mode: Not Supported 00:09:46.643 Traffic Based Keep ALive: Not Supported 00:09:46.643 Namespace Granularity: Not Supported 00:09:46.643 SQ Associations: Not Supported 00:09:46.643 UUID List: Not Supported 00:09:46.643 Multi-Domain Subsystem: Not Supported 00:09:46.644 Fixed Capacity Management: Not Supported 00:09:46.644 Variable Capacity Management: Not Supported 00:09:46.644 Delete Endurance Group: Not Supported 00:09:46.644 Delete NVM Set: Not Supported 00:09:46.644 Extended LBA Formats Supported: Supported 00:09:46.644 Flexible Data Placement Supported: Not Supported 00:09:46.644 00:09:46.644 Controller Memory Buffer Support 00:09:46.644 ================================ 00:09:46.644 Supported: No 00:09:46.644 00:09:46.644 Persistent Memory Region Support 00:09:46.644 ================================ 00:09:46.644 Supported: No 00:09:46.644 00:09:46.644 Admin Command Set Attributes 00:09:46.644 ============================ 00:09:46.644 Security Send/Receive: Not Supported 00:09:46.644 Format NVM: Supported 00:09:46.644 Firmware Activate/Download: Not Supported 00:09:46.644 Namespace Management: Supported 00:09:46.644 Device Self-Test: Not Supported 00:09:46.644 Directives: Supported 00:09:46.644 NVMe-MI: Not Supported 00:09:46.644 Virtualization Management: Not Supported 00:09:46.644 Doorbell Buffer Config: Supported 00:09:46.644 Get LBA Status Capability: Not Supported 00:09:46.644 Command & Feature Lockdown Capability: Not Supported 00:09:46.644 Abort Command Limit: 4 00:09:46.644 Async Event Request Limit: 4 00:09:46.644 Number of Firmware Slots: N/A 00:09:46.644 Firmware Slot 1 Read-Only: N/A 00:09:46.644 Firmware Activation Without Reset: N/A 00:09:46.644 Multiple Update Detection Support: N/A 00:09:46.644 Firmware Update 
Granularity: No Information Provided 00:09:46.644 Per-Namespace SMART Log: Yes 00:09:46.644 Asymmetric Namespace Access Log Page: Not Supported 00:09:46.644 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:46.644 Command Effects Log Page: Supported 00:09:46.644 Get Log Page Extended Data: Supported 00:09:46.644 Telemetry Log Pages: Not Supported 00:09:46.644 Persistent Event Log Pages: Not Supported 00:09:46.644 Supported Log Pages Log Page: May Support 00:09:46.644 Commands Supported & Effects Log Page: Not Supported 00:09:46.644 Feature Identifiers & Effects Log Page:May Support 00:09:46.644 NVMe-MI Commands & Effects Log Page: May Support 00:09:46.644 Data Area 4 for Telemetry Log: Not Supported 00:09:46.644 Error Log Page Entries Supported: 1 00:09:46.644 Keep Alive: Not Supported 00:09:46.644 00:09:46.644 NVM Command Set Attributes 00:09:46.644 ========================== 00:09:46.644 Submission Queue Entry Size 00:09:46.644 Max: 64 00:09:46.644 Min: 64 00:09:46.644 Completion Queue Entry Size 00:09:46.644 Max: 16 00:09:46.644 Min: 16 00:09:46.644 Number of Namespaces: 256 00:09:46.644 Compare Command: Supported 00:09:46.644 Write Uncorrectable Command: Not Supported 00:09:46.644 Dataset Management Command: Supported 00:09:46.644 Write Zeroes Command: Supported 00:09:46.644 Set Features Save Field: Supported 00:09:46.644 Reservations: Not Supported 00:09:46.644 Timestamp: Supported 00:09:46.644 Copy: Supported 00:09:46.644 Volatile Write Cache: Present 00:09:46.644 Atomic Write Unit (Normal): 1 00:09:46.644 Atomic Write Unit (PFail): 1 00:09:46.644 Atomic Compare & Write Unit: 1 00:09:46.644 Fused Compare & Write: Not Supported 00:09:46.644 Scatter-Gather List 00:09:46.644 SGL Command Set: Supported 00:09:46.644 SGL Keyed: Not Supported 00:09:46.644 SGL Bit Bucket Descriptor: Not Supported 00:09:46.644 SGL Metadata Pointer: Not Supported 00:09:46.644 Oversized SGL: Not Supported 00:09:46.644 SGL Metadata Address: Not Supported 00:09:46.644 SGL Offset: Not Supported 00:09:46.644 Transport SGL Data Block: Not Supported 00:09:46.644 Replay Protected Memory Block: Not Supported 00:09:46.644 00:09:46.644 Firmware Slot Information 00:09:46.644 ========================= 00:09:46.644 Active slot: 1 00:09:46.644 Slot 1 Firmware Revision: 1.0 00:09:46.644 00:09:46.644 00:09:46.644 Commands Supported and Effects 00:09:46.644 ============================== 00:09:46.644 Admin Commands 00:09:46.644 -------------- 00:09:46.644 Delete I/O Submission Queue (00h): Supported 00:09:46.644 Create I/O Submission Queue (01h): Supported 00:09:46.644 Get Log Page (02h): Supported 00:09:46.644 Delete I/O Completion Queue (04h): Supported 00:09:46.644 Create I/O Completion Queue (05h): Supported 00:09:46.644 Identify (06h): Supported 00:09:46.644 Abort (08h): Supported 00:09:46.644 Set Features (09h): Supported 00:09:46.644 Get Features (0Ah): Supported 00:09:46.644 Asynchronous Event Request (0Ch): Supported 00:09:46.644 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:46.644 Directive Send (19h): Supported 00:09:46.644 Directive Receive (1Ah): Supported 00:09:46.644 Virtualization Management (1Ch): Supported 00:09:46.644 Doorbell Buffer Config (7Ch): Supported 00:09:46.644 Format NVM (80h): Supported LBA-Change 00:09:46.644 I/O Commands 00:09:46.644 ------------ 00:09:46.644 Flush (00h): Supported LBA-Change 00:09:46.644 Write (01h): Supported LBA-Change 00:09:46.644 Read (02h): Supported 00:09:46.644 Compare (05h): Supported 00:09:46.644 Write Zeroes (08h): Supported LBA-Change 00:09:46.644 
Dataset Management (09h): Supported LBA-Change 00:09:46.644 Unknown (0Ch): Supported 00:09:46.644 Unknown (12h): Supported 00:09:46.644 Copy (19h): Supported LBA-Change 00:09:46.644 Unknown (1Dh): Supported LBA-Change 00:09:46.644 00:09:46.644 Error Log 00:09:46.644 ========= 00:09:46.644 00:09:46.644 Arbitration 00:09:46.644 =========== 00:09:46.644 Arbitration Burst: no limit 00:09:46.644 00:09:46.644 Power Management 00:09:46.644 ================ 00:09:46.644 Number of Power States: 1 00:09:46.644 Current Power State: Power State #0 00:09:46.644 Power State #0: 00:09:46.644 Max Power: 25.00 W 00:09:46.644 Non-Operational State: Operational 00:09:46.644 Entry Latency: 16 microseconds 00:09:46.644 Exit Latency: 4 microseconds 00:09:46.644 Relative Read Throughput: 0 00:09:46.644 Relative Read Latency: 0 00:09:46.644 Relative Write Throughput: 0 00:09:46.644 Relative Write Latency: 0 00:09:46.644 Idle Power: Not Reported 00:09:46.644 Active Power: Not Reported 00:09:46.644 Non-Operational Permissive Mode: Not Supported 00:09:46.644 00:09:46.644 Health Information 00:09:46.644 ================== 00:09:46.644 Critical Warnings: 00:09:46.645 Available Spare Space: OK 00:09:46.645 Temperature: OK 00:09:46.645 Device Reliability: OK 00:09:46.645 Read Only: No 00:09:46.645 Volatile Memory Backup: OK 00:09:46.645 Current Temperature: 323 Kelvin (50 Celsius) 00:09:46.645 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:46.645 Available Spare: 0% 00:09:46.645 Available Spare Threshold: 0% 00:09:46.645 Life Percentage Used: 0% 00:09:46.645 Data Units Read: 1157 00:09:46.645 Data Units Written: 1022 00:09:46.645 Host Read Commands: 48962 00:09:46.645 Host Write Commands: 47711 00:09:46.645 Controller Busy Time: 0 minutes 00:09:46.645 Power Cycles: 0 00:09:46.645 Power On Hours: 0 hours 00:09:46.645 Unsafe Shutdowns: 0 00:09:46.645 Unrecoverable Media Errors: 0 00:09:46.645 Lifetime Error Log Entries: 0 00:09:46.645 Warning Temperature Time: 0 minutes 00:09:46.645 Critical Temperature Time: 0 minutes 00:09:46.645 00:09:46.645 Number of Queues 00:09:46.645 ================ 00:09:46.645 Number of I/O Submission Queues: 64 00:09:46.645 Number of I/O Completion Queues: 64 00:09:46.645 00:09:46.645 ZNS Specific Controller Data 00:09:46.645 ============================ 00:09:46.645 Zone Append Size Limit: 0 00:09:46.645 00:09:46.645 00:09:46.645 Active Namespaces 00:09:46.645 ================= 00:09:46.645 Namespace ID:1 00:09:46.645 Error Recovery Timeout: Unlimited 00:09:46.645 Command Set Identifier: NVM (00h) 00:09:46.645 Deallocate: Supported 00:09:46.645 Deallocated/Unwritten Error: Supported 00:09:46.645 Deallocated Read Value: All 0x00 00:09:46.645 Deallocate in Write Zeroes: Not Supported 00:09:46.645 Deallocated Guard Field: 0xFFFF 00:09:46.645 Flush: Supported 00:09:46.645 Reservation: Not Supported 00:09:46.645 Namespace Sharing Capabilities: Private 00:09:46.645 Size (in LBAs): 1310720 (5GiB) 00:09:46.645 Capacity (in LBAs): 1310720 (5GiB) 00:09:46.645 Utilization (in LBAs): 1310720 (5GiB) 00:09:46.645 Thin Provisioning: Not Supported 00:09:46.645 Per-NS Atomic Units: No 00:09:46.645 Maximum Single Source Range Length: 128 00:09:46.645 Maximum Copy Length: 128 00:09:46.645 Maximum Source Range Count: 128 00:09:46.645 NGUID/EUI64 Never Reused: No 00:09:46.645 Namespace Write Protected: No 00:09:46.645 Number of LBA Formats: 8 00:09:46.645 Current LBA Format: LBA Format #04 00:09:46.645 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:46.645 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:09:46.645 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:46.645 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:46.645 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:46.645 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:46.645 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:46.645 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:46.645 00:09:46.645 NVM Specific Namespace Data 00:09:46.645 =========================== 00:09:46.645 Logical Block Storage Tag Mask: 0 00:09:46.645 Protection Information Capabilities: 00:09:46.645 16b Guard Protection Information Storage Tag Support: No 00:09:46.645 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:46.645 Storage Tag Check Read Support: No 00:09:46.645 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.645 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.645 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.645 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.645 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.645 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.645 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.645 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.645 11:53:36 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:46.645 11:53:36 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:46.906 ===================================================== 00:09:46.906 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:46.906 ===================================================== 00:09:46.906 Controller Capabilities/Features 00:09:46.906 ================================ 00:09:46.906 Vendor ID: 1b36 00:09:46.906 Subsystem Vendor ID: 1af4 00:09:46.906 Serial Number: 12342 00:09:46.906 Model Number: QEMU NVMe Ctrl 00:09:46.906 Firmware Version: 8.0.0 00:09:46.906 Recommended Arb Burst: 6 00:09:46.906 IEEE OUI Identifier: 00 54 52 00:09:46.906 Multi-path I/O 00:09:46.906 May have multiple subsystem ports: No 00:09:46.906 May have multiple controllers: No 00:09:46.906 Associated with SR-IOV VF: No 00:09:46.906 Max Data Transfer Size: 524288 00:09:46.906 Max Number of Namespaces: 256 00:09:46.906 Max Number of I/O Queues: 64 00:09:46.906 NVMe Specification Version (VS): 1.4 00:09:46.906 NVMe Specification Version (Identify): 1.4 00:09:46.906 Maximum Queue Entries: 2048 00:09:46.906 Contiguous Queues Required: Yes 00:09:46.906 Arbitration Mechanisms Supported 00:09:46.906 Weighted Round Robin: Not Supported 00:09:46.906 Vendor Specific: Not Supported 00:09:46.906 Reset Timeout: 7500 ms 00:09:46.906 Doorbell Stride: 4 bytes 00:09:46.906 NVM Subsystem Reset: Not Supported 00:09:46.906 Command Sets Supported 00:09:46.906 NVM Command Set: Supported 00:09:46.906 Boot Partition: Not Supported 00:09:46.906 Memory Page Size Minimum: 4096 bytes 00:09:46.906 Memory Page Size Maximum: 65536 bytes 00:09:46.906 Persistent Memory Region: Not Supported 00:09:46.906 Optional Asynchronous Events Supported 00:09:46.906 Namespace Attribute Notices: Supported 00:09:46.906 
Firmware Activation Notices: Not Supported 00:09:46.906 ANA Change Notices: Not Supported 00:09:46.906 PLE Aggregate Log Change Notices: Not Supported 00:09:46.906 LBA Status Info Alert Notices: Not Supported 00:09:46.906 EGE Aggregate Log Change Notices: Not Supported 00:09:46.906 Normal NVM Subsystem Shutdown event: Not Supported 00:09:46.906 Zone Descriptor Change Notices: Not Supported 00:09:46.906 Discovery Log Change Notices: Not Supported 00:09:46.906 Controller Attributes 00:09:46.906 128-bit Host Identifier: Not Supported 00:09:46.906 Non-Operational Permissive Mode: Not Supported 00:09:46.906 NVM Sets: Not Supported 00:09:46.906 Read Recovery Levels: Not Supported 00:09:46.906 Endurance Groups: Not Supported 00:09:46.906 Predictable Latency Mode: Not Supported 00:09:46.906 Traffic Based Keep ALive: Not Supported 00:09:46.906 Namespace Granularity: Not Supported 00:09:46.906 SQ Associations: Not Supported 00:09:46.906 UUID List: Not Supported 00:09:46.906 Multi-Domain Subsystem: Not Supported 00:09:46.906 Fixed Capacity Management: Not Supported 00:09:46.906 Variable Capacity Management: Not Supported 00:09:46.906 Delete Endurance Group: Not Supported 00:09:46.906 Delete NVM Set: Not Supported 00:09:46.906 Extended LBA Formats Supported: Supported 00:09:46.906 Flexible Data Placement Supported: Not Supported 00:09:46.906 00:09:46.906 Controller Memory Buffer Support 00:09:46.906 ================================ 00:09:46.906 Supported: No 00:09:46.906 00:09:46.906 Persistent Memory Region Support 00:09:46.906 ================================ 00:09:46.906 Supported: No 00:09:46.906 00:09:46.906 Admin Command Set Attributes 00:09:46.906 ============================ 00:09:46.906 Security Send/Receive: Not Supported 00:09:46.906 Format NVM: Supported 00:09:46.906 Firmware Activate/Download: Not Supported 00:09:46.906 Namespace Management: Supported 00:09:46.906 Device Self-Test: Not Supported 00:09:46.906 Directives: Supported 00:09:46.906 NVMe-MI: Not Supported 00:09:46.906 Virtualization Management: Not Supported 00:09:46.906 Doorbell Buffer Config: Supported 00:09:46.906 Get LBA Status Capability: Not Supported 00:09:46.906 Command & Feature Lockdown Capability: Not Supported 00:09:46.906 Abort Command Limit: 4 00:09:46.906 Async Event Request Limit: 4 00:09:46.906 Number of Firmware Slots: N/A 00:09:46.906 Firmware Slot 1 Read-Only: N/A 00:09:46.906 Firmware Activation Without Reset: N/A 00:09:46.906 Multiple Update Detection Support: N/A 00:09:46.906 Firmware Update Granularity: No Information Provided 00:09:46.906 Per-Namespace SMART Log: Yes 00:09:46.906 Asymmetric Namespace Access Log Page: Not Supported 00:09:46.906 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:46.906 Command Effects Log Page: Supported 00:09:46.906 Get Log Page Extended Data: Supported 00:09:46.906 Telemetry Log Pages: Not Supported 00:09:46.906 Persistent Event Log Pages: Not Supported 00:09:46.906 Supported Log Pages Log Page: May Support 00:09:46.906 Commands Supported & Effects Log Page: Not Supported 00:09:46.906 Feature Identifiers & Effects Log Page:May Support 00:09:46.906 NVMe-MI Commands & Effects Log Page: May Support 00:09:46.906 Data Area 4 for Telemetry Log: Not Supported 00:09:46.906 Error Log Page Entries Supported: 1 00:09:46.906 Keep Alive: Not Supported 00:09:46.906 00:09:46.906 NVM Command Set Attributes 00:09:46.906 ========================== 00:09:46.906 Submission Queue Entry Size 00:09:46.906 Max: 64 00:09:46.906 Min: 64 00:09:46.906 Completion Queue Entry Size 00:09:46.906 Max: 16 
00:09:46.906 Min: 16 00:09:46.906 Number of Namespaces: 256 00:09:46.906 Compare Command: Supported 00:09:46.906 Write Uncorrectable Command: Not Supported 00:09:46.907 Dataset Management Command: Supported 00:09:46.907 Write Zeroes Command: Supported 00:09:46.907 Set Features Save Field: Supported 00:09:46.907 Reservations: Not Supported 00:09:46.907 Timestamp: Supported 00:09:46.907 Copy: Supported 00:09:46.907 Volatile Write Cache: Present 00:09:46.907 Atomic Write Unit (Normal): 1 00:09:46.907 Atomic Write Unit (PFail): 1 00:09:46.907 Atomic Compare & Write Unit: 1 00:09:46.907 Fused Compare & Write: Not Supported 00:09:46.907 Scatter-Gather List 00:09:46.907 SGL Command Set: Supported 00:09:46.907 SGL Keyed: Not Supported 00:09:46.907 SGL Bit Bucket Descriptor: Not Supported 00:09:46.907 SGL Metadata Pointer: Not Supported 00:09:46.907 Oversized SGL: Not Supported 00:09:46.907 SGL Metadata Address: Not Supported 00:09:46.907 SGL Offset: Not Supported 00:09:46.907 Transport SGL Data Block: Not Supported 00:09:46.907 Replay Protected Memory Block: Not Supported 00:09:46.907 00:09:46.907 Firmware Slot Information 00:09:46.907 ========================= 00:09:46.907 Active slot: 1 00:09:46.907 Slot 1 Firmware Revision: 1.0 00:09:46.907 00:09:46.907 00:09:46.907 Commands Supported and Effects 00:09:46.907 ============================== 00:09:46.907 Admin Commands 00:09:46.907 -------------- 00:09:46.907 Delete I/O Submission Queue (00h): Supported 00:09:46.907 Create I/O Submission Queue (01h): Supported 00:09:46.907 Get Log Page (02h): Supported 00:09:46.907 Delete I/O Completion Queue (04h): Supported 00:09:46.907 Create I/O Completion Queue (05h): Supported 00:09:46.907 Identify (06h): Supported 00:09:46.907 Abort (08h): Supported 00:09:46.907 Set Features (09h): Supported 00:09:46.907 Get Features (0Ah): Supported 00:09:46.907 Asynchronous Event Request (0Ch): Supported 00:09:46.907 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:46.907 Directive Send (19h): Supported 00:09:46.907 Directive Receive (1Ah): Supported 00:09:46.907 Virtualization Management (1Ch): Supported 00:09:46.907 Doorbell Buffer Config (7Ch): Supported 00:09:46.907 Format NVM (80h): Supported LBA-Change 00:09:46.907 I/O Commands 00:09:46.907 ------------ 00:09:46.907 Flush (00h): Supported LBA-Change 00:09:46.907 Write (01h): Supported LBA-Change 00:09:46.907 Read (02h): Supported 00:09:46.907 Compare (05h): Supported 00:09:46.907 Write Zeroes (08h): Supported LBA-Change 00:09:46.907 Dataset Management (09h): Supported LBA-Change 00:09:46.907 Unknown (0Ch): Supported 00:09:46.907 Unknown (12h): Supported 00:09:46.907 Copy (19h): Supported LBA-Change 00:09:46.907 Unknown (1Dh): Supported LBA-Change 00:09:46.907 00:09:46.907 Error Log 00:09:46.907 ========= 00:09:46.907 00:09:46.907 Arbitration 00:09:46.907 =========== 00:09:46.907 Arbitration Burst: no limit 00:09:46.907 00:09:46.907 Power Management 00:09:46.907 ================ 00:09:46.907 Number of Power States: 1 00:09:46.907 Current Power State: Power State #0 00:09:46.907 Power State #0: 00:09:46.907 Max Power: 25.00 W 00:09:46.907 Non-Operational State: Operational 00:09:46.907 Entry Latency: 16 microseconds 00:09:46.907 Exit Latency: 4 microseconds 00:09:46.907 Relative Read Throughput: 0 00:09:46.907 Relative Read Latency: 0 00:09:46.907 Relative Write Throughput: 0 00:09:46.907 Relative Write Latency: 0 00:09:46.907 Idle Power: Not Reported 00:09:46.907 Active Power: Not Reported 00:09:46.907 Non-Operational Permissive Mode: Not Supported 
00:09:46.907 00:09:46.907 Health Information 00:09:46.907 ================== 00:09:46.907 Critical Warnings: 00:09:46.907 Available Spare Space: OK 00:09:46.907 Temperature: OK 00:09:46.907 Device Reliability: OK 00:09:46.907 Read Only: No 00:09:46.907 Volatile Memory Backup: OK 00:09:46.907 Current Temperature: 323 Kelvin (50 Celsius) 00:09:46.907 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:46.907 Available Spare: 0% 00:09:46.907 Available Spare Threshold: 0% 00:09:46.907 Life Percentage Used: 0% 00:09:46.907 Data Units Read: 2511 00:09:46.907 Data Units Written: 2299 00:09:46.907 Host Read Commands: 104580 00:09:46.907 Host Write Commands: 102849 00:09:46.907 Controller Busy Time: 0 minutes 00:09:46.907 Power Cycles: 0 00:09:46.907 Power On Hours: 0 hours 00:09:46.907 Unsafe Shutdowns: 0 00:09:46.907 Unrecoverable Media Errors: 0 00:09:46.907 Lifetime Error Log Entries: 0 00:09:46.907 Warning Temperature Time: 0 minutes 00:09:46.907 Critical Temperature Time: 0 minutes 00:09:46.907 00:09:46.907 Number of Queues 00:09:46.907 ================ 00:09:46.907 Number of I/O Submission Queues: 64 00:09:46.907 Number of I/O Completion Queues: 64 00:09:46.907 00:09:46.907 ZNS Specific Controller Data 00:09:46.907 ============================ 00:09:46.907 Zone Append Size Limit: 0 00:09:46.907 00:09:46.907 00:09:46.907 Active Namespaces 00:09:46.907 ================= 00:09:46.907 Namespace ID:1 00:09:46.907 Error Recovery Timeout: Unlimited 00:09:46.907 Command Set Identifier: NVM (00h) 00:09:46.907 Deallocate: Supported 00:09:46.907 Deallocated/Unwritten Error: Supported 00:09:46.907 Deallocated Read Value: All 0x00 00:09:46.907 Deallocate in Write Zeroes: Not Supported 00:09:46.907 Deallocated Guard Field: 0xFFFF 00:09:46.907 Flush: Supported 00:09:46.907 Reservation: Not Supported 00:09:46.907 Namespace Sharing Capabilities: Private 00:09:46.907 Size (in LBAs): 1048576 (4GiB) 00:09:46.907 Capacity (in LBAs): 1048576 (4GiB) 00:09:46.907 Utilization (in LBAs): 1048576 (4GiB) 00:09:46.907 Thin Provisioning: Not Supported 00:09:46.907 Per-NS Atomic Units: No 00:09:46.907 Maximum Single Source Range Length: 128 00:09:46.907 Maximum Copy Length: 128 00:09:46.907 Maximum Source Range Count: 128 00:09:46.907 NGUID/EUI64 Never Reused: No 00:09:46.907 Namespace Write Protected: No 00:09:46.907 Number of LBA Formats: 8 00:09:46.907 Current LBA Format: LBA Format #04 00:09:46.907 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:46.907 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:46.907 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:46.907 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:46.908 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:46.908 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:46.908 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:46.908 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:46.908 00:09:46.908 NVM Specific Namespace Data 00:09:46.908 =========================== 00:09:46.908 Logical Block Storage Tag Mask: 0 00:09:46.908 Protection Information Capabilities: 00:09:46.908 16b Guard Protection Information Storage Tag Support: No 00:09:46.908 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:46.908 Storage Tag Check Read Support: No 00:09:46.908 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Namespace ID:2 00:09:46.908 Error Recovery Timeout: Unlimited 00:09:46.908 Command Set Identifier: NVM (00h) 00:09:46.908 Deallocate: Supported 00:09:46.908 Deallocated/Unwritten Error: Supported 00:09:46.908 Deallocated Read Value: All 0x00 00:09:46.908 Deallocate in Write Zeroes: Not Supported 00:09:46.908 Deallocated Guard Field: 0xFFFF 00:09:46.908 Flush: Supported 00:09:46.908 Reservation: Not Supported 00:09:46.908 Namespace Sharing Capabilities: Private 00:09:46.908 Size (in LBAs): 1048576 (4GiB) 00:09:46.908 Capacity (in LBAs): 1048576 (4GiB) 00:09:46.908 Utilization (in LBAs): 1048576 (4GiB) 00:09:46.908 Thin Provisioning: Not Supported 00:09:46.908 Per-NS Atomic Units: No 00:09:46.908 Maximum Single Source Range Length: 128 00:09:46.908 Maximum Copy Length: 128 00:09:46.908 Maximum Source Range Count: 128 00:09:46.908 NGUID/EUI64 Never Reused: No 00:09:46.908 Namespace Write Protected: No 00:09:46.908 Number of LBA Formats: 8 00:09:46.908 Current LBA Format: LBA Format #04 00:09:46.908 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:46.908 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:46.908 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:46.908 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:46.908 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:46.908 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:46.908 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:46.908 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:46.908 00:09:46.908 NVM Specific Namespace Data 00:09:46.908 =========================== 00:09:46.908 Logical Block Storage Tag Mask: 0 00:09:46.908 Protection Information Capabilities: 00:09:46.908 16b Guard Protection Information Storage Tag Support: No 00:09:46.908 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:46.908 Storage Tag Check Read Support: No 00:09:46.908 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Namespace ID:3 00:09:46.908 Error Recovery Timeout: Unlimited 00:09:46.908 Command Set Identifier: NVM (00h) 00:09:46.908 Deallocate: Supported 00:09:46.908 Deallocated/Unwritten Error: Supported 00:09:46.908 Deallocated Read 
Value: All 0x00 00:09:46.908 Deallocate in Write Zeroes: Not Supported 00:09:46.908 Deallocated Guard Field: 0xFFFF 00:09:46.908 Flush: Supported 00:09:46.908 Reservation: Not Supported 00:09:46.908 Namespace Sharing Capabilities: Private 00:09:46.908 Size (in LBAs): 1048576 (4GiB) 00:09:46.908 Capacity (in LBAs): 1048576 (4GiB) 00:09:46.908 Utilization (in LBAs): 1048576 (4GiB) 00:09:46.908 Thin Provisioning: Not Supported 00:09:46.908 Per-NS Atomic Units: No 00:09:46.908 Maximum Single Source Range Length: 128 00:09:46.908 Maximum Copy Length: 128 00:09:46.908 Maximum Source Range Count: 128 00:09:46.908 NGUID/EUI64 Never Reused: No 00:09:46.908 Namespace Write Protected: No 00:09:46.908 Number of LBA Formats: 8 00:09:46.908 Current LBA Format: LBA Format #04 00:09:46.908 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:46.908 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:46.908 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:46.908 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:46.908 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:46.908 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:46.908 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:46.908 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:46.908 00:09:46.908 NVM Specific Namespace Data 00:09:46.908 =========================== 00:09:46.908 Logical Block Storage Tag Mask: 0 00:09:46.908 Protection Information Capabilities: 00:09:46.908 16b Guard Protection Information Storage Tag Support: No 00:09:46.908 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:46.908 Storage Tag Check Read Support: No 00:09:46.908 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:46.908 11:53:36 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:46.908 11:53:36 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:47.169 ===================================================== 00:09:47.169 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:47.169 ===================================================== 00:09:47.169 Controller Capabilities/Features 00:09:47.169 ================================ 00:09:47.169 Vendor ID: 1b36 00:09:47.169 Subsystem Vendor ID: 1af4 00:09:47.169 Serial Number: 12343 00:09:47.169 Model Number: QEMU NVMe Ctrl 00:09:47.169 Firmware Version: 8.0.0 00:09:47.169 Recommended Arb Burst: 6 00:09:47.169 IEEE OUI Identifier: 00 54 52 00:09:47.169 Multi-path I/O 00:09:47.169 May have multiple subsystem ports: No 00:09:47.169 May have multiple controllers: Yes 00:09:47.169 Associated with SR-IOV VF: No 00:09:47.169 Max Data Transfer Size: 524288 00:09:47.169 Max Number of Namespaces: 
256 00:09:47.169 Max Number of I/O Queues: 64 00:09:47.169 NVMe Specification Version (VS): 1.4 00:09:47.169 NVMe Specification Version (Identify): 1.4 00:09:47.169 Maximum Queue Entries: 2048 00:09:47.169 Contiguous Queues Required: Yes 00:09:47.169 Arbitration Mechanisms Supported 00:09:47.169 Weighted Round Robin: Not Supported 00:09:47.169 Vendor Specific: Not Supported 00:09:47.169 Reset Timeout: 7500 ms 00:09:47.169 Doorbell Stride: 4 bytes 00:09:47.169 NVM Subsystem Reset: Not Supported 00:09:47.169 Command Sets Supported 00:09:47.169 NVM Command Set: Supported 00:09:47.169 Boot Partition: Not Supported 00:09:47.169 Memory Page Size Minimum: 4096 bytes 00:09:47.169 Memory Page Size Maximum: 65536 bytes 00:09:47.169 Persistent Memory Region: Not Supported 00:09:47.169 Optional Asynchronous Events Supported 00:09:47.169 Namespace Attribute Notices: Supported 00:09:47.169 Firmware Activation Notices: Not Supported 00:09:47.169 ANA Change Notices: Not Supported 00:09:47.169 PLE Aggregate Log Change Notices: Not Supported 00:09:47.169 LBA Status Info Alert Notices: Not Supported 00:09:47.169 EGE Aggregate Log Change Notices: Not Supported 00:09:47.169 Normal NVM Subsystem Shutdown event: Not Supported 00:09:47.169 Zone Descriptor Change Notices: Not Supported 00:09:47.169 Discovery Log Change Notices: Not Supported 00:09:47.169 Controller Attributes 00:09:47.169 128-bit Host Identifier: Not Supported 00:09:47.169 Non-Operational Permissive Mode: Not Supported 00:09:47.169 NVM Sets: Not Supported 00:09:47.169 Read Recovery Levels: Not Supported 00:09:47.169 Endurance Groups: Supported 00:09:47.169 Predictable Latency Mode: Not Supported 00:09:47.169 Traffic Based Keep Alive: Not Supported 00:09:47.169 Namespace Granularity: Not Supported 00:09:47.169 SQ Associations: Not Supported 00:09:47.169 UUID List: Not Supported 00:09:47.169 Multi-Domain Subsystem: Not Supported 00:09:47.169 Fixed Capacity Management: Not Supported 00:09:47.169 Variable Capacity Management: Not Supported 00:09:47.169 Delete Endurance Group: Not Supported 00:09:47.169 Delete NVM Set: Not Supported 00:09:47.169 Extended LBA Formats Supported: Supported 00:09:47.169 Flexible Data Placement Supported: Supported 00:09:47.169 00:09:47.169 Controller Memory Buffer Support 00:09:47.169 ================================ 00:09:47.169 Supported: No 00:09:47.169 00:09:47.169 Persistent Memory Region Support 00:09:47.169 ================================ 00:09:47.169 Supported: No 00:09:47.169 00:09:47.169 Admin Command Set Attributes 00:09:47.169 ============================ 00:09:47.169 Security Send/Receive: Not Supported 00:09:47.169 Format NVM: Supported 00:09:47.169 Firmware Activate/Download: Not Supported 00:09:47.169 Namespace Management: Supported 00:09:47.169 Device Self-Test: Not Supported 00:09:47.169 Directives: Supported 00:09:47.169 NVMe-MI: Not Supported 00:09:47.169 Virtualization Management: Not Supported 00:09:47.169 Doorbell Buffer Config: Supported 00:09:47.169 Get LBA Status Capability: Not Supported 00:09:47.169 Command & Feature Lockdown Capability: Not Supported 00:09:47.169 Abort Command Limit: 4 00:09:47.169 Async Event Request Limit: 4 00:09:47.169 Number of Firmware Slots: N/A 00:09:47.169 Firmware Slot 1 Read-Only: N/A 00:09:47.169 Firmware Activation Without Reset: N/A 00:09:47.169 Multiple Update Detection Support: N/A 00:09:47.170 Firmware Update Granularity: No Information Provided 00:09:47.170 Per-Namespace SMART Log: Yes 00:09:47.170 Asymmetric Namespace Access Log Page: Not Supported
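The identify dump above comes from the spdk_nvme_identify invocation logged earlier, and the controller at 0000:00:13.0 is the one in this run reporting both Endurance Groups and Flexible Data Placement as Supported, which is why FDP log pages appear further down. A minimal sketch for re-dumping it outside the harness follows; the binary path, transport ID, and -i value are copied verbatim from this log, while root privileges and a matching local SPDK build are assumptions:

# Sketch, not part of the harness: re-dump identify data for the FDP-capable
# controller. Path and arguments are copied from this log; running under sudo
# with hugepages already configured is an assumption about the local setup.
sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:PCIe traddr:0000:00:13.0' -i 0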
00:09:47.170 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:47.170 Command Effects Log Page: Supported 00:09:47.170 Get Log Page Extended Data: Supported 00:09:47.170 Telemetry Log Pages: Not Supported 00:09:47.170 Persistent Event Log Pages: Not Supported 00:09:47.170 Supported Log Pages Log Page: May Support 00:09:47.170 Commands Supported & Effects Log Page: Not Supported 00:09:47.170 Feature Identifiers & Effects Log Page: May Support 00:09:47.170 NVMe-MI Commands & Effects Log Page: May Support 00:09:47.170 Data Area 4 for Telemetry Log: Not Supported 00:09:47.170 Error Log Page Entries Supported: 1 00:09:47.170 Keep Alive: Not Supported 00:09:47.170 00:09:47.170 NVM Command Set Attributes 00:09:47.170 ========================== 00:09:47.170 Submission Queue Entry Size 00:09:47.170 Max: 64 00:09:47.170 Min: 64 00:09:47.170 Completion Queue Entry Size 00:09:47.170 Max: 16 00:09:47.170 Min: 16 00:09:47.170 Number of Namespaces: 256 00:09:47.170 Compare Command: Supported 00:09:47.170 Write Uncorrectable Command: Not Supported 00:09:47.170 Dataset Management Command: Supported 00:09:47.170 Write Zeroes Command: Supported 00:09:47.170 Set Features Save Field: Supported 00:09:47.170 Reservations: Not Supported 00:09:47.170 Timestamp: Supported 00:09:47.170 Copy: Supported 00:09:47.170 Volatile Write Cache: Present 00:09:47.170 Atomic Write Unit (Normal): 1 00:09:47.170 Atomic Write Unit (PFail): 1 00:09:47.170 Atomic Compare & Write Unit: 1 00:09:47.170 Fused Compare & Write: Not Supported 00:09:47.170 Scatter-Gather List 00:09:47.170 SGL Command Set: Supported 00:09:47.170 SGL Keyed: Not Supported 00:09:47.170 SGL Bit Bucket Descriptor: Not Supported 00:09:47.170 SGL Metadata Pointer: Not Supported 00:09:47.170 Oversized SGL: Not Supported 00:09:47.170 SGL Metadata Address: Not Supported 00:09:47.170 SGL Offset: Not Supported 00:09:47.170 Transport SGL Data Block: Not Supported 00:09:47.170 Replay Protected Memory Block: Not Supported 00:09:47.170 00:09:47.170 Firmware Slot Information 00:09:47.170 ========================= 00:09:47.170 Active slot: 1 00:09:47.170 Slot 1 Firmware Revision: 1.0 00:09:47.170 00:09:47.170 00:09:47.170 Commands Supported and Effects 00:09:47.170 ============================== 00:09:47.170 Admin Commands 00:09:47.170 -------------- 00:09:47.170 Delete I/O Submission Queue (00h): Supported 00:09:47.170 Create I/O Submission Queue (01h): Supported 00:09:47.170 Get Log Page (02h): Supported 00:09:47.170 Delete I/O Completion Queue (04h): Supported 00:09:47.170 Create I/O Completion Queue (05h): Supported 00:09:47.170 Identify (06h): Supported 00:09:47.170 Abort (08h): Supported 00:09:47.170 Set Features (09h): Supported 00:09:47.170 Get Features (0Ah): Supported 00:09:47.170 Asynchronous Event Request (0Ch): Supported 00:09:47.170 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:47.170 Directive Send (19h): Supported 00:09:47.170 Directive Receive (1Ah): Supported 00:09:47.170 Virtualization Management (1Ch): Supported 00:09:47.170 Doorbell Buffer Config (7Ch): Supported 00:09:47.170 Format NVM (80h): Supported LBA-Change 00:09:47.170 I/O Commands 00:09:47.170 ------------ 00:09:47.170 Flush (00h): Supported LBA-Change 00:09:47.170 Write (01h): Supported LBA-Change 00:09:47.170 Read (02h): Supported 00:09:47.170 Compare (05h): Supported 00:09:47.170 Write Zeroes (08h): Supported LBA-Change 00:09:47.170 Dataset Management (09h): Supported LBA-Change 00:09:47.170 Unknown (0Ch): Supported 00:09:47.170 Unknown (12h): Supported 00:09:47.170 Copy
(19h): Supported LBA-Change 00:09:47.170 Unknown (1Dh): Supported LBA-Change 00:09:47.170 00:09:47.170 Error Log 00:09:47.170 ========= 00:09:47.170 00:09:47.170 Arbitration 00:09:47.170 =========== 00:09:47.170 Arbitration Burst: no limit 00:09:47.170 00:09:47.170 Power Management 00:09:47.170 ================ 00:09:47.170 Number of Power States: 1 00:09:47.170 Current Power State: Power State #0 00:09:47.170 Power State #0: 00:09:47.170 Max Power: 25.00 W 00:09:47.170 Non-Operational State: Operational 00:09:47.170 Entry Latency: 16 microseconds 00:09:47.170 Exit Latency: 4 microseconds 00:09:47.170 Relative Read Throughput: 0 00:09:47.170 Relative Read Latency: 0 00:09:47.170 Relative Write Throughput: 0 00:09:47.170 Relative Write Latency: 0 00:09:47.170 Idle Power: Not Reported 00:09:47.170 Active Power: Not Reported 00:09:47.170 Non-Operational Permissive Mode: Not Supported 00:09:47.170 00:09:47.170 Health Information 00:09:47.170 ================== 00:09:47.170 Critical Warnings: 00:09:47.170 Available Spare Space: OK 00:09:47.170 Temperature: OK 00:09:47.170 Device Reliability: OK 00:09:47.170 Read Only: No 00:09:47.170 Volatile Memory Backup: OK 00:09:47.170 Current Temperature: 323 Kelvin (50 Celsius) 00:09:47.170 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:47.170 Available Spare: 0% 00:09:47.170 Available Spare Threshold: 0% 00:09:47.170 Life Percentage Used: 0% 00:09:47.170 Data Units Read: 978 00:09:47.170 Data Units Written: 907 00:09:47.170 Host Read Commands: 36099 00:09:47.170 Host Write Commands: 35522 00:09:47.170 Controller Busy Time: 0 minutes 00:09:47.170 Power Cycles: 0 00:09:47.170 Power On Hours: 0 hours 00:09:47.170 Unsafe Shutdowns: 0 00:09:47.170 Unrecoverable Media Errors: 0 00:09:47.170 Lifetime Error Log Entries: 0 00:09:47.170 Warning Temperature Time: 0 minutes 00:09:47.170 Critical Temperature Time: 0 minutes 00:09:47.170 00:09:47.170 Number of Queues 00:09:47.170 ================ 00:09:47.170 Number of I/O Submission Queues: 64 00:09:47.170 Number of I/O Completion Queues: 64 00:09:47.170 00:09:47.170 ZNS Specific Controller Data 00:09:47.170 ============================ 00:09:47.170 Zone Append Size Limit: 0 00:09:47.170 00:09:47.170 00:09:47.170 Active Namespaces 00:09:47.170 ================= 00:09:47.170 Namespace ID:1 00:09:47.170 Error Recovery Timeout: Unlimited 00:09:47.170 Command Set Identifier: NVM (00h) 00:09:47.170 Deallocate: Supported 00:09:47.170 Deallocated/Unwritten Error: Supported 00:09:47.170 Deallocated Read Value: All 0x00 00:09:47.170 Deallocate in Write Zeroes: Not Supported 00:09:47.171 Deallocated Guard Field: 0xFFFF 00:09:47.171 Flush: Supported 00:09:47.171 Reservation: Not Supported 00:09:47.171 Namespace Sharing Capabilities: Multiple Controllers 00:09:47.171 Size (in LBAs): 262144 (1GiB) 00:09:47.171 Capacity (in LBAs): 262144 (1GiB) 00:09:47.171 Utilization (in LBAs): 262144 (1GiB) 00:09:47.171 Thin Provisioning: Not Supported 00:09:47.171 Per-NS Atomic Units: No 00:09:47.171 Maximum Single Source Range Length: 128 00:09:47.171 Maximum Copy Length: 128 00:09:47.171 Maximum Source Range Count: 128 00:09:47.171 NGUID/EUI64 Never Reused: No 00:09:47.171 Namespace Write Protected: No 00:09:47.171 Endurance group ID: 1 00:09:47.171 Number of LBA Formats: 8 00:09:47.171 Current LBA Format: LBA Format #04 00:09:47.171 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:47.171 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:47.171 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:47.171 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:09:47.171 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:47.171 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:47.171 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:47.171 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:47.171 00:09:47.171 Get Feature FDP: 00:09:47.171 ================ 00:09:47.171 Enabled: Yes 00:09:47.171 FDP configuration index: 0 00:09:47.171 00:09:47.171 FDP configurations log page 00:09:47.171 =========================== 00:09:47.171 Number of FDP configurations: 1 00:09:47.171 Version: 0 00:09:47.171 Size: 112 00:09:47.171 FDP Configuration Descriptor: 0 00:09:47.171 Descriptor Size: 96 00:09:47.171 Reclaim Group Identifier format: 2 00:09:47.171 FDP Volatile Write Cache: Not Present 00:09:47.171 FDP Configuration: Valid 00:09:47.171 Vendor Specific Size: 0 00:09:47.171 Number of Reclaim Groups: 2 00:09:47.171 Number of Reclaim Unit Handles: 8 00:09:47.171 Max Placement Identifiers: 128 00:09:47.171 Number of Namespaces Supported: 256 00:09:47.171 Reclaim Unit Nominal Size: 6000000 bytes 00:09:47.171 Estimated Reclaim Unit Time Limit: Not Reported 00:09:47.171 RUH Desc #000: RUH Type: Initially Isolated 00:09:47.171 RUH Desc #001: RUH Type: Initially Isolated 00:09:47.171 RUH Desc #002: RUH Type: Initially Isolated 00:09:47.171 RUH Desc #003: RUH Type: Initially Isolated 00:09:47.171 RUH Desc #004: RUH Type: Initially Isolated 00:09:47.171 RUH Desc #005: RUH Type: Initially Isolated 00:09:47.171 RUH Desc #006: RUH Type: Initially Isolated 00:09:47.171 RUH Desc #007: RUH Type: Initially Isolated 00:09:47.171 00:09:47.171 FDP reclaim unit handle usage log page 00:09:47.431 ====================================== 00:09:47.431 Number of Reclaim Unit Handles: 8 00:09:47.431 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:47.431 RUH Usage Desc #001: RUH Attributes: Unused 00:09:47.431 RUH Usage Desc #002: RUH Attributes: Unused 00:09:47.431 RUH Usage Desc #003: RUH Attributes: Unused 00:09:47.431 RUH Usage Desc #004: RUH Attributes: Unused 00:09:47.431 RUH Usage Desc #005: RUH Attributes: Unused 00:09:47.431 RUH Usage Desc #006: RUH Attributes: Unused 00:09:47.431 RUH Usage Desc #007: RUH Attributes: Unused 00:09:47.431 00:09:47.431 FDP statistics log page 00:09:47.431 ======================= 00:09:47.431 Host bytes with metadata written: 574070784 00:09:47.431 Media bytes with metadata written: 574148608 00:09:47.431 Media bytes erased: 0 00:09:47.431 00:09:47.431 FDP events log page 00:09:47.431 =================== 00:09:47.431 Number of FDP events: 0 00:09:47.431 00:09:47.431 NVM Specific Namespace Data 00:09:47.431 =========================== 00:09:47.431 Logical Block Storage Tag Mask: 0 00:09:47.431 Protection Information Capabilities: 00:09:47.431 16b Guard Protection Information Storage Tag Support: No 00:09:47.431 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:47.431 Storage Tag Check Read Support: No 00:09:47.431 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:47.431 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:47.431 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:47.431 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:47.431 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:47.431 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:47.431 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:47.431 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:47.431 00:09:47.431 real 0m1.693s 00:09:47.431 user 0m0.667s 00:09:47.431 sys 0m0.834s 00:09:47.431 11:53:37 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.431 ************************************ 00:09:47.431 END TEST nvme_identify 00:09:47.431 11:53:37 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:47.431 ************************************ 00:09:47.431 11:53:37 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:47.431 11:53:37 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:47.431 11:53:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.431 11:53:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:47.431 ************************************ 00:09:47.431 START TEST nvme_perf 00:09:47.431 ************************************ 00:09:47.431 11:53:37 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:09:47.431 11:53:37 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:48.811 Initializing NVMe Controllers 00:09:48.811 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:48.811 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:48.811 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:48.811 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:48.811 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:48.811 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:48.811 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:48.811 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:48.811 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:48.811 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:48.811 Initialization complete. Launching workers. 
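The nvme_perf test launched above drives every attached namespace with the spdk_nvme_perf invocation shown in the transcript, and the -LL flag is what produces both the summary latency tables and the per-bucket histograms that follow. A minimal reproduction sketch with our reading of the flags; the path and all argument values are copied from the log, and the glosses on -i and -N in particular are assumptions rather than authoritative documentation:

# Sketch: reproduce the perf workload from this run (values copied from the log).
#   -q 128   queue depth per namespace
#   -o 12288 I/O size in bytes (12 KiB)
#   -w read  sequential-read workload
#   -t 1     run time in seconds
#   -LL      latency tracking; giving -L twice also emits the detailed histogram
#   -i 0     shared memory group ID (assumption)
#   -N       skip controller shutdown notification on exit (assumption)
sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -q 128 -w read -o 12288 -t 1 -LL -i 0 -N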
00:09:48.811 ======================================================== 00:09:48.811 Latency(us) 00:09:48.811 Device Information : IOPS MiB/s Average min max 00:09:48.812 PCIE (0000:00:10.0) NSID 1 from core 0: 14130.89 165.60 9076.59 7694.24 51570.86 00:09:48.812 PCIE (0000:00:11.0) NSID 1 from core 0: 14130.89 165.60 9062.78 7824.63 49592.69 00:09:48.812 PCIE (0000:00:13.0) NSID 1 from core 0: 14130.89 165.60 9046.18 7803.24 47983.38 00:09:48.812 PCIE (0000:00:12.0) NSID 1 from core 0: 14130.89 165.60 9029.98 7805.30 45936.78 00:09:48.812 PCIE (0000:00:12.0) NSID 2 from core 0: 14130.89 165.60 9013.67 7782.79 43964.60 00:09:48.812 PCIE (0000:00:12.0) NSID 3 from core 0: 14194.83 166.35 8956.65 7738.16 37143.73 00:09:48.812 ======================================================== 00:09:48.812 Total : 84849.26 994.33 9030.92 7694.24 51570.86 00:09:48.812 00:09:48.812 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:48.812 ================================================================================= 00:09:48.812 1.00000% : 7948.543us 00:09:48.812 10.00000% : 8159.100us 00:09:48.812 25.00000% : 8369.658us 00:09:48.812 50.00000% : 8685.494us 00:09:48.812 75.00000% : 8948.691us 00:09:48.812 90.00000% : 9211.888us 00:09:48.812 95.00000% : 9527.724us 00:09:48.812 98.00000% : 12159.692us 00:09:48.812 99.00000% : 15897.086us 00:09:48.812 99.50000% : 44848.733us 00:09:48.812 99.90000% : 51165.455us 00:09:48.812 99.99000% : 51586.570us 00:09:48.812 99.99900% : 51586.570us 00:09:48.812 99.99990% : 51586.570us 00:09:48.812 99.99999% : 51586.570us 00:09:48.812 00:09:48.812 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:48.812 ================================================================================= 00:09:48.812 1.00000% : 8001.182us 00:09:48.812 10.00000% : 8264.379us 00:09:48.812 25.00000% : 8422.297us 00:09:48.812 50.00000% : 8685.494us 00:09:48.812 75.00000% : 8896.051us 00:09:48.812 90.00000% : 9211.888us 00:09:48.812 95.00000% : 9475.084us 00:09:48.812 98.00000% : 12475.528us 00:09:48.812 99.00000% : 16318.201us 00:09:48.812 99.50000% : 43164.273us 00:09:48.812 99.90000% : 49270.439us 00:09:48.812 99.99000% : 49691.553us 00:09:48.812 99.99900% : 49691.553us 00:09:48.812 99.99990% : 49691.553us 00:09:48.812 99.99999% : 49691.553us 00:09:48.812 00:09:48.812 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:48.812 ================================================================================= 00:09:48.812 1.00000% : 8001.182us 00:09:48.812 10.00000% : 8264.379us 00:09:48.812 25.00000% : 8422.297us 00:09:48.812 50.00000% : 8685.494us 00:09:48.812 75.00000% : 8896.051us 00:09:48.812 90.00000% : 9159.248us 00:09:48.812 95.00000% : 9475.084us 00:09:48.812 98.00000% : 12528.167us 00:09:48.812 99.00000% : 16107.643us 00:09:48.812 99.50000% : 41690.371us 00:09:48.812 99.90000% : 47796.537us 00:09:48.812 99.99000% : 48007.094us 00:09:48.812 99.99900% : 48007.094us 00:09:48.812 99.99990% : 48007.094us 00:09:48.812 99.99999% : 48007.094us 00:09:48.812 00:09:48.812 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:48.812 ================================================================================= 00:09:48.812 1.00000% : 8001.182us 00:09:48.812 10.00000% : 8264.379us 00:09:48.812 25.00000% : 8422.297us 00:09:48.812 50.00000% : 8685.494us 00:09:48.812 75.00000% : 8896.051us 00:09:48.812 90.00000% : 9211.888us 00:09:48.812 95.00000% : 9527.724us 00:09:48.812 98.00000% : 12686.085us 00:09:48.812 99.00000% : 
15686.529us 00:09:48.812 99.50000% : 39795.354us 00:09:48.812 99.90000% : 45690.962us 00:09:48.812 99.99000% : 46112.077us 00:09:48.812 99.99900% : 46112.077us 00:09:48.812 99.99990% : 46112.077us 00:09:48.812 99.99999% : 46112.077us 00:09:48.812 00:09:48.812 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:48.812 ================================================================================= 00:09:48.812 1.00000% : 8001.182us 00:09:48.812 10.00000% : 8264.379us 00:09:48.812 25.00000% : 8422.297us 00:09:48.812 50.00000% : 8685.494us 00:09:48.812 75.00000% : 8896.051us 00:09:48.812 90.00000% : 9211.888us 00:09:48.812 95.00000% : 9580.363us 00:09:48.812 98.00000% : 12317.610us 00:09:48.812 99.00000% : 15160.135us 00:09:48.812 99.50000% : 37689.780us 00:09:48.812 99.90000% : 43585.388us 00:09:48.812 99.99000% : 44006.503us 00:09:48.812 99.99900% : 44006.503us 00:09:48.812 99.99990% : 44006.503us 00:09:48.812 99.99999% : 44006.503us 00:09:48.812 00:09:48.812 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:48.812 ================================================================================= 00:09:48.812 1.00000% : 8001.182us 00:09:48.812 10.00000% : 8211.740us 00:09:48.812 25.00000% : 8422.297us 00:09:48.812 50.00000% : 8685.494us 00:09:48.812 75.00000% : 8896.051us 00:09:48.812 90.00000% : 9211.888us 00:09:48.812 95.00000% : 9633.002us 00:09:48.812 98.00000% : 12054.413us 00:09:48.812 99.00000% : 15160.135us 00:09:48.812 99.50000% : 30741.385us 00:09:48.812 99.90000% : 36847.550us 00:09:48.812 99.99000% : 37268.665us 00:09:48.812 99.99900% : 37268.665us 00:09:48.812 99.99990% : 37268.665us 00:09:48.812 99.99999% : 37268.665us 00:09:48.812 00:09:48.812 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:48.812 ============================================================================== 00:09:48.812 Range in us Cumulative IO count 00:09:48.812 7685.346 - 7737.986: 0.0283% ( 4) 00:09:48.812 7737.986 - 7790.625: 0.0636% ( 5) 00:09:48.812 7790.625 - 7843.264: 0.2192% ( 22) 00:09:48.812 7843.264 - 7895.904: 0.6292% ( 58) 00:09:48.812 7895.904 - 7948.543: 1.6191% ( 140) 00:09:48.812 7948.543 - 8001.182: 2.8563% ( 175) 00:09:48.812 8001.182 - 8053.822: 4.6663% ( 256) 00:09:48.812 8053.822 - 8106.461: 7.2257% ( 362) 00:09:48.812 8106.461 - 8159.100: 10.1598% ( 415) 00:09:48.812 8159.100 - 8211.740: 13.7019% ( 501) 00:09:48.812 8211.740 - 8264.379: 17.5905% ( 550) 00:09:48.812 8264.379 - 8317.018: 21.6700% ( 577) 00:09:48.812 8317.018 - 8369.658: 26.0322% ( 617) 00:09:48.812 8369.658 - 8422.297: 30.5642% ( 641) 00:09:48.812 8422.297 - 8474.937: 34.9901% ( 626) 00:09:48.812 8474.937 - 8527.576: 39.6564% ( 660) 00:09:48.812 8527.576 - 8580.215: 44.4712% ( 681) 00:09:48.812 8580.215 - 8632.855: 49.2364% ( 674) 00:09:48.812 8632.855 - 8685.494: 54.2067% ( 703) 00:09:48.812 8685.494 - 8738.133: 59.1770% ( 703) 00:09:48.812 8738.133 - 8790.773: 64.0908% ( 695) 00:09:48.812 8790.773 - 8843.412: 68.6793% ( 649) 00:09:48.812 8843.412 - 8896.051: 73.1335% ( 630) 00:09:48.812 8896.051 - 8948.691: 77.2412% ( 581) 00:09:48.812 8948.691 - 9001.330: 80.8894% ( 516) 00:09:48.812 9001.330 - 9053.969: 84.0498% ( 447) 00:09:48.812 9053.969 - 9106.609: 86.7294% ( 379) 00:09:48.812 9106.609 - 9159.248: 88.7019% ( 279) 00:09:48.812 9159.248 - 9211.888: 90.2644% ( 221) 00:09:48.812 9211.888 - 9264.527: 91.4805% ( 172) 00:09:48.812 9264.527 - 9317.166: 92.4986% ( 144) 00:09:48.812 9317.166 - 9369.806: 93.2834% ( 111) 00:09:48.812 9369.806 - 9422.445: 93.9621% 
( 96) 00:09:48.812 9422.445 - 9475.084: 94.5206% ( 79) 00:09:48.812 9475.084 - 9527.724: 95.0014% ( 68) 00:09:48.812 9527.724 - 9580.363: 95.3903% ( 55) 00:09:48.812 9580.363 - 9633.002: 95.6236% ( 33) 00:09:48.812 9633.002 - 9685.642: 95.8286% ( 29) 00:09:48.812 9685.642 - 9738.281: 95.9700% ( 20) 00:09:48.812 9738.281 - 9790.920: 96.1185% ( 21) 00:09:48.812 9790.920 - 9843.560: 96.2458% ( 18) 00:09:48.812 9843.560 - 9896.199: 96.3023% ( 8) 00:09:48.812 9896.199 - 9948.839: 96.3942% ( 13) 00:09:48.812 9948.839 - 10001.478: 96.4579% ( 9) 00:09:48.812 10001.478 - 10054.117: 96.5215% ( 9) 00:09:48.812 10054.117 - 10106.757: 96.5568% ( 5) 00:09:48.812 10106.757 - 10159.396: 96.6275% ( 10) 00:09:48.812 10159.396 - 10212.035: 96.6982% ( 10) 00:09:48.812 10212.035 - 10264.675: 96.7477% ( 7) 00:09:48.812 10264.675 - 10317.314: 96.8114% ( 9) 00:09:48.812 10317.314 - 10369.953: 96.8750% ( 9) 00:09:48.812 10369.953 - 10422.593: 96.9457% ( 10) 00:09:48.812 10422.593 - 10475.232: 97.0093% ( 9) 00:09:48.812 10475.232 - 10527.871: 97.0871% ( 11) 00:09:48.812 10527.871 - 10580.511: 97.1366% ( 7) 00:09:48.812 10580.511 - 10633.150: 97.1790% ( 6) 00:09:48.812 10633.150 - 10685.790: 97.2214% ( 6) 00:09:48.812 10685.790 - 10738.429: 97.2497% ( 4) 00:09:48.812 10738.429 - 10791.068: 97.2851% ( 5) 00:09:48.812 10791.068 - 10843.708: 97.3063% ( 3) 00:09:48.812 10843.708 - 10896.347: 97.3487% ( 6) 00:09:48.812 10896.347 - 10948.986: 97.3840% ( 5) 00:09:48.812 10948.986 - 11001.626: 97.4265% ( 6) 00:09:48.812 11001.626 - 11054.265: 97.4689% ( 6) 00:09:48.812 11054.265 - 11106.904: 97.5042% ( 5) 00:09:48.812 11106.904 - 11159.544: 97.5467% ( 6) 00:09:48.812 11159.544 - 11212.183: 97.5820% ( 5) 00:09:48.812 11212.183 - 11264.822: 97.6244% ( 6) 00:09:48.812 11264.822 - 11317.462: 97.6456% ( 3) 00:09:48.812 11317.462 - 11370.101: 97.6598% ( 2) 00:09:48.812 11370.101 - 11422.741: 97.6881% ( 4) 00:09:48.812 11422.741 - 11475.380: 97.7093% ( 3) 00:09:48.812 11475.380 - 11528.019: 97.7305% ( 3) 00:09:48.812 11528.019 - 11580.659: 97.7658% ( 5) 00:09:48.812 11580.659 - 11633.298: 97.7729% ( 1) 00:09:48.812 11633.298 - 11685.937: 97.8012% ( 4) 00:09:48.812 11685.937 - 11738.577: 97.8224% ( 3) 00:09:48.812 11738.577 - 11791.216: 97.8507% ( 4) 00:09:48.812 11791.216 - 11843.855: 97.8719% ( 3) 00:09:48.812 11843.855 - 11896.495: 97.8931% ( 3) 00:09:48.812 11896.495 - 11949.134: 97.9143% ( 3) 00:09:48.813 11949.134 - 12001.773: 97.9355% ( 3) 00:09:48.813 12001.773 - 12054.413: 97.9638% ( 4) 00:09:48.813 12054.413 - 12107.052: 97.9921% ( 4) 00:09:48.813 12107.052 - 12159.692: 98.0133% ( 3) 00:09:48.813 12159.692 - 12212.331: 98.0345% ( 3) 00:09:48.813 12212.331 - 12264.970: 98.0628% ( 4) 00:09:48.813 12264.970 - 12317.610: 98.0769% ( 2) 00:09:48.813 12317.610 - 12370.249: 98.1123% ( 5) 00:09:48.813 12370.249 - 12422.888: 98.1264% ( 2) 00:09:48.813 12422.888 - 12475.528: 98.1335% ( 1) 00:09:48.813 12475.528 - 12528.167: 98.1476% ( 2) 00:09:48.813 12528.167 - 12580.806: 98.1759% ( 4) 00:09:48.813 12580.806 - 12633.446: 98.1971% ( 3) 00:09:48.813 12633.446 - 12686.085: 98.2113% ( 2) 00:09:48.813 12686.085 - 12738.724: 98.2466% ( 5) 00:09:48.813 12738.724 - 12791.364: 98.2749% ( 4) 00:09:48.813 12896.643 - 12949.282: 98.2820% ( 1) 00:09:48.813 12949.282 - 13001.921: 98.2890% ( 1) 00:09:48.813 13001.921 - 13054.561: 98.3032% ( 2) 00:09:48.813 13054.561 - 13107.200: 98.3102% ( 1) 00:09:48.813 13107.200 - 13159.839: 98.3244% ( 2) 00:09:48.813 13159.839 - 13212.479: 98.3314% ( 1) 00:09:48.813 13212.479 - 13265.118: 98.3385% ( 1) 
00:09:48.813 13265.118 - 13317.757: 98.3597% ( 3) 00:09:48.813 13317.757 - 13370.397: 98.3668% ( 1) 00:09:48.813 13423.036 - 13475.676: 98.3880% ( 3) 00:09:48.813 13475.676 - 13580.954: 98.4021% ( 2) 00:09:48.813 13580.954 - 13686.233: 98.4304% ( 4) 00:09:48.813 13686.233 - 13791.512: 98.4446% ( 2) 00:09:48.813 13791.512 - 13896.790: 98.4658% ( 3) 00:09:48.813 13896.790 - 14002.069: 98.4870% ( 3) 00:09:48.813 14002.069 - 14107.348: 98.5082% ( 3) 00:09:48.813 14107.348 - 14212.627: 98.5294% ( 3) 00:09:48.813 14212.627 - 14317.905: 98.5436% ( 2) 00:09:48.813 14317.905 - 14423.184: 98.5789% ( 5) 00:09:48.813 14423.184 - 14528.463: 98.6213% ( 6) 00:09:48.813 14528.463 - 14633.741: 98.6779% ( 8) 00:09:48.813 14633.741 - 14739.020: 98.7274% ( 7) 00:09:48.813 14739.020 - 14844.299: 98.7557% ( 4) 00:09:48.813 14844.299 - 14949.578: 98.7839% ( 4) 00:09:48.813 14949.578 - 15054.856: 98.7981% ( 2) 00:09:48.813 15054.856 - 15160.135: 98.8264% ( 4) 00:09:48.813 15160.135 - 15265.414: 98.8546% ( 4) 00:09:48.813 15265.414 - 15370.692: 98.8758% ( 3) 00:09:48.813 15370.692 - 15475.971: 98.8971% ( 3) 00:09:48.813 15475.971 - 15581.250: 98.9253% ( 4) 00:09:48.813 15581.250 - 15686.529: 98.9536% ( 4) 00:09:48.813 15686.529 - 15791.807: 98.9748% ( 3) 00:09:48.813 15791.807 - 15897.086: 99.0031% ( 4) 00:09:48.813 15897.086 - 16002.365: 99.0314% ( 4) 00:09:48.813 16002.365 - 16107.643: 99.0526% ( 3) 00:09:48.813 16107.643 - 16212.922: 99.0809% ( 4) 00:09:48.813 16212.922 - 16318.201: 99.0950% ( 2) 00:09:48.813 43164.273 - 43374.831: 99.1374% ( 6) 00:09:48.813 43374.831 - 43585.388: 99.1869% ( 7) 00:09:48.813 43585.388 - 43795.945: 99.2435% ( 8) 00:09:48.813 43795.945 - 44006.503: 99.2930% ( 7) 00:09:48.813 44006.503 - 44217.060: 99.3495% ( 8) 00:09:48.813 44217.060 - 44427.618: 99.3990% ( 7) 00:09:48.813 44427.618 - 44638.175: 99.4556% ( 8) 00:09:48.813 44638.175 - 44848.733: 99.5122% ( 8) 00:09:48.813 44848.733 - 45059.290: 99.5475% ( 5) 00:09:48.813 49691.553 - 49902.111: 99.5899% ( 6) 00:09:48.813 49902.111 - 50112.668: 99.6394% ( 7) 00:09:48.813 50112.668 - 50323.226: 99.6889% ( 7) 00:09:48.813 50323.226 - 50533.783: 99.7525% ( 9) 00:09:48.813 50533.783 - 50744.341: 99.8020% ( 7) 00:09:48.813 50744.341 - 50954.898: 99.8586% ( 8) 00:09:48.813 50954.898 - 51165.455: 99.9152% ( 8) 00:09:48.813 51165.455 - 51376.013: 99.9646% ( 7) 00:09:48.813 51376.013 - 51586.570: 100.0000% ( 5) 00:09:48.813 00:09:48.813 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:48.813 ============================================================================== 00:09:48.813 Range in us Cumulative IO count 00:09:48.813 7790.625 - 7843.264: 0.0212% ( 3) 00:09:48.813 7843.264 - 7895.904: 0.1414% ( 17) 00:09:48.813 7895.904 - 7948.543: 0.4454% ( 43) 00:09:48.813 7948.543 - 8001.182: 1.1736% ( 103) 00:09:48.813 8001.182 - 8053.822: 2.3261% ( 163) 00:09:48.813 8053.822 - 8106.461: 4.1007% ( 251) 00:09:48.813 8106.461 - 8159.100: 6.6742% ( 364) 00:09:48.813 8159.100 - 8211.740: 9.9689% ( 466) 00:09:48.813 8211.740 - 8264.379: 13.9211% ( 559) 00:09:48.813 8264.379 - 8317.018: 18.2834% ( 617) 00:09:48.813 8317.018 - 8369.658: 23.0911% ( 680) 00:09:48.813 8369.658 - 8422.297: 28.0896% ( 707) 00:09:48.813 8422.297 - 8474.937: 33.4135% ( 753) 00:09:48.813 8474.937 - 8527.576: 38.6100% ( 735) 00:09:48.813 8527.576 - 8580.215: 44.0894% ( 775) 00:09:48.813 8580.215 - 8632.855: 49.7313% ( 798) 00:09:48.813 8632.855 - 8685.494: 55.2814% ( 785) 00:09:48.813 8685.494 - 8738.133: 60.9234% ( 798) 00:09:48.813 8738.133 - 8790.773: 
66.3674% ( 770) 00:09:48.813 8790.773 - 8843.412: 71.4791% ( 723) 00:09:48.813 8843.412 - 8896.051: 76.1666% ( 663) 00:09:48.813 8896.051 - 8948.691: 80.1683% ( 566) 00:09:48.813 8948.691 - 9001.330: 83.4771% ( 468) 00:09:48.813 9001.330 - 9053.969: 86.1143% ( 373) 00:09:48.813 9053.969 - 9106.609: 88.2282% ( 299) 00:09:48.813 9106.609 - 9159.248: 89.9109% ( 238) 00:09:48.813 9159.248 - 9211.888: 91.2613% ( 191) 00:09:48.813 9211.888 - 9264.527: 92.3077% ( 148) 00:09:48.813 9264.527 - 9317.166: 93.1844% ( 124) 00:09:48.813 9317.166 - 9369.806: 93.9480% ( 108) 00:09:48.813 9369.806 - 9422.445: 94.5701% ( 88) 00:09:48.813 9422.445 - 9475.084: 95.0650% ( 70) 00:09:48.813 9475.084 - 9527.724: 95.4468% ( 54) 00:09:48.813 9527.724 - 9580.363: 95.7226% ( 39) 00:09:48.813 9580.363 - 9633.002: 95.9064% ( 26) 00:09:48.813 9633.002 - 9685.642: 96.0690% ( 23) 00:09:48.813 9685.642 - 9738.281: 96.1892% ( 17) 00:09:48.813 9738.281 - 9790.920: 96.3023% ( 16) 00:09:48.813 9790.920 - 9843.560: 96.3801% ( 11) 00:09:48.813 9843.560 - 9896.199: 96.5003% ( 17) 00:09:48.813 9896.199 - 9948.839: 96.6063% ( 15) 00:09:48.813 9948.839 - 10001.478: 96.6841% ( 11) 00:09:48.813 10001.478 - 10054.117: 96.7407% ( 8) 00:09:48.813 10054.117 - 10106.757: 96.7831% ( 6) 00:09:48.813 10106.757 - 10159.396: 96.8538% ( 10) 00:09:48.813 10159.396 - 10212.035: 96.8962% ( 6) 00:09:48.813 10212.035 - 10264.675: 96.9386% ( 6) 00:09:48.813 10264.675 - 10317.314: 96.9952% ( 8) 00:09:48.813 10317.314 - 10369.953: 97.0164% ( 3) 00:09:48.813 10369.953 - 10422.593: 97.0447% ( 4) 00:09:48.813 10422.593 - 10475.232: 97.0730% ( 4) 00:09:48.813 10475.232 - 10527.871: 97.1012% ( 4) 00:09:48.813 10527.871 - 10580.511: 97.1366% ( 5) 00:09:48.813 10580.511 - 10633.150: 97.1649% ( 4) 00:09:48.813 10633.150 - 10685.790: 97.2002% ( 5) 00:09:48.813 10685.790 - 10738.429: 97.2214% ( 3) 00:09:48.813 10738.429 - 10791.068: 97.2497% ( 4) 00:09:48.813 10791.068 - 10843.708: 97.2851% ( 5) 00:09:48.813 10843.708 - 10896.347: 97.3133% ( 4) 00:09:48.813 10896.347 - 10948.986: 97.3487% ( 5) 00:09:48.813 10948.986 - 11001.626: 97.3699% ( 3) 00:09:48.813 11001.626 - 11054.265: 97.4053% ( 5) 00:09:48.813 11054.265 - 11106.904: 97.4335% ( 4) 00:09:48.813 11106.904 - 11159.544: 97.4548% ( 3) 00:09:48.813 11159.544 - 11212.183: 97.4901% ( 5) 00:09:48.813 11212.183 - 11264.822: 97.5184% ( 4) 00:09:48.813 11264.822 - 11317.462: 97.5467% ( 4) 00:09:48.813 11317.462 - 11370.101: 97.5749% ( 4) 00:09:48.813 11370.101 - 11422.741: 97.6032% ( 4) 00:09:48.813 11422.741 - 11475.380: 97.6386% ( 5) 00:09:48.813 11475.380 - 11528.019: 97.6669% ( 4) 00:09:48.813 11528.019 - 11580.659: 97.7022% ( 5) 00:09:48.813 11580.659 - 11633.298: 97.7234% ( 3) 00:09:48.813 11633.298 - 11685.937: 97.7305% ( 1) 00:09:48.813 11685.937 - 11738.577: 97.7446% ( 2) 00:09:48.813 11738.577 - 11791.216: 97.7517% ( 1) 00:09:48.813 11791.216 - 11843.855: 97.7729% ( 3) 00:09:48.813 11843.855 - 11896.495: 97.7870% ( 2) 00:09:48.813 11896.495 - 11949.134: 97.8012% ( 2) 00:09:48.813 11949.134 - 12001.773: 97.8153% ( 2) 00:09:48.813 12001.773 - 12054.413: 97.8295% ( 2) 00:09:48.813 12054.413 - 12107.052: 97.8436% ( 2) 00:09:48.813 12107.052 - 12159.692: 97.8648% ( 3) 00:09:48.813 12159.692 - 12212.331: 97.8860% ( 3) 00:09:48.813 12212.331 - 12264.970: 97.9143% ( 4) 00:09:48.813 12264.970 - 12317.610: 97.9426% ( 4) 00:09:48.813 12317.610 - 12370.249: 97.9709% ( 4) 00:09:48.813 12370.249 - 12422.888: 97.9992% ( 4) 00:09:48.813 12422.888 - 12475.528: 98.0274% ( 4) 00:09:48.813 12475.528 - 12528.167: 98.0557% ( 
4) 00:09:48.813 12528.167 - 12580.806: 98.0911% ( 5) 00:09:48.813 12580.806 - 12633.446: 98.1123% ( 3) 00:09:48.813 12633.446 - 12686.085: 98.1406% ( 4) 00:09:48.813 12686.085 - 12738.724: 98.1547% ( 2) 00:09:48.813 12738.724 - 12791.364: 98.1830% ( 4) 00:09:48.813 12791.364 - 12844.003: 98.2183% ( 5) 00:09:48.813 12844.003 - 12896.643: 98.2395% ( 3) 00:09:48.813 12896.643 - 12949.282: 98.2607% ( 3) 00:09:48.813 12949.282 - 13001.921: 98.2890% ( 4) 00:09:48.813 13001.921 - 13054.561: 98.3244% ( 5) 00:09:48.813 13054.561 - 13107.200: 98.3527% ( 4) 00:09:48.813 13107.200 - 13159.839: 98.3739% ( 3) 00:09:48.813 13159.839 - 13212.479: 98.4021% ( 4) 00:09:48.813 13212.479 - 13265.118: 98.4304% ( 4) 00:09:48.813 13265.118 - 13317.757: 98.4587% ( 4) 00:09:48.813 13317.757 - 13370.397: 98.4729% ( 2) 00:09:48.813 13370.397 - 13423.036: 98.4870% ( 2) 00:09:48.813 13423.036 - 13475.676: 98.4941% ( 1) 00:09:48.813 13475.676 - 13580.954: 98.5223% ( 4) 00:09:48.813 13580.954 - 13686.233: 98.5436% ( 3) 00:09:48.813 13686.233 - 13791.512: 98.5718% ( 4) 00:09:48.814 13791.512 - 13896.790: 98.5930% ( 3) 00:09:48.814 13896.790 - 14002.069: 98.6213% ( 4) 00:09:48.814 14002.069 - 14107.348: 98.6425% ( 3) 00:09:48.814 14317.905 - 14423.184: 98.6637% ( 3) 00:09:48.814 14423.184 - 14528.463: 98.6779% ( 2) 00:09:48.814 14528.463 - 14633.741: 98.6991% ( 3) 00:09:48.814 14633.741 - 14739.020: 98.7203% ( 3) 00:09:48.814 14739.020 - 14844.299: 98.7415% ( 3) 00:09:48.814 14844.299 - 14949.578: 98.7627% ( 3) 00:09:48.814 14949.578 - 15054.856: 98.7839% ( 3) 00:09:48.814 15054.856 - 15160.135: 98.8051% ( 3) 00:09:48.814 15160.135 - 15265.414: 98.8193% ( 2) 00:09:48.814 15265.414 - 15370.692: 98.8405% ( 3) 00:09:48.814 15370.692 - 15475.971: 98.8617% ( 3) 00:09:48.814 15475.971 - 15581.250: 98.8829% ( 3) 00:09:48.814 15581.250 - 15686.529: 98.9041% ( 3) 00:09:48.814 15686.529 - 15791.807: 98.9183% ( 2) 00:09:48.814 15791.807 - 15897.086: 98.9324% ( 2) 00:09:48.814 15897.086 - 16002.365: 98.9536% ( 3) 00:09:48.814 16002.365 - 16107.643: 98.9748% ( 3) 00:09:48.814 16107.643 - 16212.922: 98.9960% ( 3) 00:09:48.814 16212.922 - 16318.201: 99.0173% ( 3) 00:09:48.814 16318.201 - 16423.480: 99.0314% ( 2) 00:09:48.814 16423.480 - 16528.758: 99.0526% ( 3) 00:09:48.814 16528.758 - 16634.037: 99.0738% ( 3) 00:09:48.814 16634.037 - 16739.316: 99.0880% ( 2) 00:09:48.814 16739.316 - 16844.594: 99.0950% ( 1) 00:09:48.814 41269.256 - 41479.814: 99.1021% ( 1) 00:09:48.814 41479.814 - 41690.371: 99.1516% ( 7) 00:09:48.814 41690.371 - 41900.929: 99.2081% ( 8) 00:09:48.814 41900.929 - 42111.486: 99.2435% ( 5) 00:09:48.814 42111.486 - 42322.043: 99.3001% ( 8) 00:09:48.814 42322.043 - 42532.601: 99.3495% ( 7) 00:09:48.814 42532.601 - 42743.158: 99.3990% ( 7) 00:09:48.814 42743.158 - 42953.716: 99.4485% ( 7) 00:09:48.814 42953.716 - 43164.273: 99.5051% ( 8) 00:09:48.814 43164.273 - 43374.831: 99.5475% ( 6) 00:09:48.814 47796.537 - 48007.094: 99.5899% ( 6) 00:09:48.814 48007.094 - 48217.651: 99.6394% ( 7) 00:09:48.814 48217.651 - 48428.209: 99.6889% ( 7) 00:09:48.814 48428.209 - 48638.766: 99.7455% ( 8) 00:09:48.814 48638.766 - 48849.324: 99.8020% ( 8) 00:09:48.814 48849.324 - 49059.881: 99.8515% ( 7) 00:09:48.814 49059.881 - 49270.439: 99.9081% ( 8) 00:09:48.814 49270.439 - 49480.996: 99.9646% ( 8) 00:09:48.814 49480.996 - 49691.553: 100.0000% ( 5) 00:09:48.814 00:09:48.814 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:48.814 ============================================================================== 00:09:48.814 Range 
in us Cumulative IO count 00:09:48.814 7790.625 - 7843.264: 0.0495% ( 7) 00:09:48.814 7843.264 - 7895.904: 0.1697% ( 17) 00:09:48.814 7895.904 - 7948.543: 0.4383% ( 38) 00:09:48.814 7948.543 - 8001.182: 1.1736% ( 104) 00:09:48.814 8001.182 - 8053.822: 2.2907% ( 158) 00:09:48.814 8053.822 - 8106.461: 4.1360% ( 261) 00:09:48.814 8106.461 - 8159.100: 6.4904% ( 333) 00:09:48.814 8159.100 - 8211.740: 9.7568% ( 462) 00:09:48.814 8211.740 - 8264.379: 13.6524% ( 551) 00:09:48.814 8264.379 - 8317.018: 18.1702% ( 639) 00:09:48.814 8317.018 - 8369.658: 22.8577% ( 663) 00:09:48.814 8369.658 - 8422.297: 27.9412% ( 719) 00:09:48.814 8422.297 - 8474.937: 33.1519% ( 737) 00:09:48.814 8474.937 - 8527.576: 38.5110% ( 758) 00:09:48.814 8527.576 - 8580.215: 44.1176% ( 793) 00:09:48.814 8580.215 - 8632.855: 49.8445% ( 810) 00:09:48.814 8632.855 - 8685.494: 55.5571% ( 808) 00:09:48.814 8685.494 - 8738.133: 61.1779% ( 795) 00:09:48.814 8738.133 - 8790.773: 66.7845% ( 793) 00:09:48.814 8790.773 - 8843.412: 71.8538% ( 717) 00:09:48.814 8843.412 - 8896.051: 76.5201% ( 660) 00:09:48.814 8896.051 - 8948.691: 80.5288% ( 567) 00:09:48.814 8948.691 - 9001.330: 83.9437% ( 483) 00:09:48.814 9001.330 - 9053.969: 86.5314% ( 366) 00:09:48.814 9053.969 - 9106.609: 88.4545% ( 272) 00:09:48.814 9106.609 - 9159.248: 90.0028% ( 219) 00:09:48.814 9159.248 - 9211.888: 91.3532% ( 191) 00:09:48.814 9211.888 - 9264.527: 92.4420% ( 154) 00:09:48.814 9264.527 - 9317.166: 93.3258% ( 125) 00:09:48.814 9317.166 - 9369.806: 94.1035% ( 110) 00:09:48.814 9369.806 - 9422.445: 94.6691% ( 80) 00:09:48.814 9422.445 - 9475.084: 95.0509% ( 54) 00:09:48.814 9475.084 - 9527.724: 95.3620% ( 44) 00:09:48.814 9527.724 - 9580.363: 95.6307% ( 38) 00:09:48.814 9580.363 - 9633.002: 95.8357% ( 29) 00:09:48.814 9633.002 - 9685.642: 96.0195% ( 26) 00:09:48.814 9685.642 - 9738.281: 96.1821% ( 23) 00:09:48.814 9738.281 - 9790.920: 96.3377% ( 22) 00:09:48.814 9790.920 - 9843.560: 96.4861% ( 21) 00:09:48.814 9843.560 - 9896.199: 96.5851% ( 14) 00:09:48.814 9896.199 - 9948.839: 96.7053% ( 17) 00:09:48.814 9948.839 - 10001.478: 96.7689% ( 9) 00:09:48.814 10001.478 - 10054.117: 96.8114% ( 6) 00:09:48.814 10054.117 - 10106.757: 96.8538% ( 6) 00:09:48.814 10106.757 - 10159.396: 96.9033% ( 7) 00:09:48.814 10159.396 - 10212.035: 96.9528% ( 7) 00:09:48.814 10212.035 - 10264.675: 96.9952% ( 6) 00:09:48.814 10264.675 - 10317.314: 97.0447% ( 7) 00:09:48.814 10317.314 - 10369.953: 97.0730% ( 4) 00:09:48.814 10369.953 - 10422.593: 97.1154% ( 6) 00:09:48.814 10422.593 - 10475.232: 97.1578% ( 6) 00:09:48.814 10475.232 - 10527.871: 97.1861% ( 4) 00:09:48.814 10527.871 - 10580.511: 97.2285% ( 6) 00:09:48.814 10580.511 - 10633.150: 97.2639% ( 5) 00:09:48.814 10633.150 - 10685.790: 97.2992% ( 5) 00:09:48.814 10685.790 - 10738.429: 97.3346% ( 5) 00:09:48.814 10738.429 - 10791.068: 97.3628% ( 4) 00:09:48.814 10791.068 - 10843.708: 97.3982% ( 5) 00:09:48.814 10843.708 - 10896.347: 97.4335% ( 5) 00:09:48.814 10896.347 - 10948.986: 97.4548% ( 3) 00:09:48.814 10948.986 - 11001.626: 97.4901% ( 5) 00:09:48.814 11001.626 - 11054.265: 97.5184% ( 4) 00:09:48.814 11054.265 - 11106.904: 97.5537% ( 5) 00:09:48.814 11106.904 - 11159.544: 97.5820% ( 4) 00:09:48.814 11159.544 - 11212.183: 97.6103% ( 4) 00:09:48.814 11212.183 - 11264.822: 97.6456% ( 5) 00:09:48.814 11264.822 - 11317.462: 97.6810% ( 5) 00:09:48.814 11317.462 - 11370.101: 97.7022% ( 3) 00:09:48.814 11370.101 - 11422.741: 97.7234% ( 3) 00:09:48.814 11422.741 - 11475.380: 97.7376% ( 2) 00:09:48.814 11949.134 - 12001.773: 97.7446% ( 1) 
00:09:48.814 12001.773 - 12054.413: 97.7658% ( 3) 00:09:48.814 12054.413 - 12107.052: 97.7870% ( 3) 00:09:48.814 12107.052 - 12159.692: 97.8153% ( 4) 00:09:48.814 12159.692 - 12212.331: 97.8436% ( 4) 00:09:48.814 12212.331 - 12264.970: 97.8719% ( 4) 00:09:48.814 12264.970 - 12317.610: 97.8931% ( 3) 00:09:48.814 12317.610 - 12370.249: 97.9285% ( 5) 00:09:48.814 12370.249 - 12422.888: 97.9567% ( 4) 00:09:48.814 12422.888 - 12475.528: 97.9779% ( 3) 00:09:48.814 12475.528 - 12528.167: 98.0062% ( 4) 00:09:48.814 12528.167 - 12580.806: 98.0274% ( 3) 00:09:48.814 12580.806 - 12633.446: 98.0557% ( 4) 00:09:48.814 12633.446 - 12686.085: 98.0769% ( 3) 00:09:48.814 12686.085 - 12738.724: 98.1052% ( 4) 00:09:48.814 12738.724 - 12791.364: 98.1335% ( 4) 00:09:48.814 12791.364 - 12844.003: 98.1618% ( 4) 00:09:48.814 12844.003 - 12896.643: 98.1830% ( 3) 00:09:48.814 12896.643 - 12949.282: 98.2183% ( 5) 00:09:48.814 12949.282 - 13001.921: 98.2466% ( 4) 00:09:48.814 13001.921 - 13054.561: 98.2749% ( 4) 00:09:48.814 13054.561 - 13107.200: 98.3032% ( 4) 00:09:48.814 13107.200 - 13159.839: 98.3314% ( 4) 00:09:48.814 13159.839 - 13212.479: 98.3597% ( 4) 00:09:48.814 13212.479 - 13265.118: 98.3809% ( 3) 00:09:48.814 13265.118 - 13317.757: 98.4092% ( 4) 00:09:48.814 13317.757 - 13370.397: 98.4375% ( 4) 00:09:48.814 13370.397 - 13423.036: 98.4658% ( 4) 00:09:48.814 13423.036 - 13475.676: 98.4870% ( 3) 00:09:48.814 13475.676 - 13580.954: 98.5365% ( 7) 00:09:48.814 13580.954 - 13686.233: 98.5648% ( 4) 00:09:48.814 13686.233 - 13791.512: 98.5860% ( 3) 00:09:48.814 13791.512 - 13896.790: 98.6143% ( 4) 00:09:48.814 13896.790 - 14002.069: 98.6355% ( 3) 00:09:48.814 14002.069 - 14107.348: 98.6425% ( 1) 00:09:48.814 14317.905 - 14423.184: 98.6496% ( 1) 00:09:48.814 14423.184 - 14528.463: 98.6708% ( 3) 00:09:48.814 14528.463 - 14633.741: 98.6920% ( 3) 00:09:48.814 14633.741 - 14739.020: 98.7132% ( 3) 00:09:48.814 14739.020 - 14844.299: 98.7344% ( 3) 00:09:48.814 14844.299 - 14949.578: 98.7486% ( 2) 00:09:48.814 14949.578 - 15054.856: 98.7698% ( 3) 00:09:48.814 15054.856 - 15160.135: 98.7910% ( 3) 00:09:48.814 15160.135 - 15265.414: 98.8122% ( 3) 00:09:48.814 15265.414 - 15370.692: 98.8334% ( 3) 00:09:48.814 15370.692 - 15475.971: 98.8617% ( 4) 00:09:48.814 15475.971 - 15581.250: 98.8900% ( 4) 00:09:48.814 15581.250 - 15686.529: 98.9112% ( 3) 00:09:48.814 15686.529 - 15791.807: 98.9395% ( 4) 00:09:48.814 15791.807 - 15897.086: 98.9607% ( 3) 00:09:48.814 15897.086 - 16002.365: 98.9819% ( 3) 00:09:48.814 16002.365 - 16107.643: 99.0031% ( 3) 00:09:48.814 16107.643 - 16212.922: 99.0314% ( 4) 00:09:48.814 16212.922 - 16318.201: 99.0597% ( 4) 00:09:48.814 16318.201 - 16423.480: 99.0809% ( 3) 00:09:48.814 16423.480 - 16528.758: 99.0950% ( 2) 00:09:48.814 40005.912 - 40216.469: 99.1516% ( 8) 00:09:48.814 40216.469 - 40427.027: 99.2011% ( 7) 00:09:48.814 40427.027 - 40637.584: 99.2506% ( 7) 00:09:48.814 40637.584 - 40848.141: 99.3001% ( 7) 00:09:48.814 40848.141 - 41058.699: 99.3566% ( 8) 00:09:48.814 41058.699 - 41269.256: 99.4061% ( 7) 00:09:48.814 41269.256 - 41479.814: 99.4627% ( 8) 00:09:48.814 41479.814 - 41690.371: 99.5122% ( 7) 00:09:48.814 41690.371 - 41900.929: 99.5475% ( 5) 00:09:48.814 46112.077 - 46322.635: 99.5758% ( 4) 00:09:48.814 46322.635 - 46533.192: 99.6324% ( 8) 00:09:48.814 46533.192 - 46743.749: 99.6818% ( 7) 00:09:48.814 46743.749 - 46954.307: 99.7384% ( 8) 00:09:48.814 46954.307 - 47164.864: 99.7879% ( 7) 00:09:48.814 47164.864 - 47375.422: 99.8374% ( 7) 00:09:48.815 47375.422 - 47585.979: 99.8939% ( 8) 
00:09:48.815 47585.979 - 47796.537: 99.9505% ( 8) 00:09:48.815 47796.537 - 48007.094: 100.0000% ( 7) 00:09:48.815 00:09:48.815 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:48.815 ============================================================================== 00:09:48.815 Range in us Cumulative IO count 00:09:48.815 7790.625 - 7843.264: 0.0424% ( 6) 00:09:48.815 7843.264 - 7895.904: 0.1414% ( 14) 00:09:48.815 7895.904 - 7948.543: 0.3889% ( 35) 00:09:48.815 7948.543 - 8001.182: 1.0959% ( 100) 00:09:48.815 8001.182 - 8053.822: 2.2059% ( 157) 00:09:48.815 8053.822 - 8106.461: 3.9734% ( 250) 00:09:48.815 8106.461 - 8159.100: 6.5894% ( 370) 00:09:48.815 8159.100 - 8211.740: 9.8063% ( 455) 00:09:48.815 8211.740 - 8264.379: 13.9140% ( 581) 00:09:48.815 8264.379 - 8317.018: 18.2127% ( 608) 00:09:48.815 8317.018 - 8369.658: 22.9921% ( 676) 00:09:48.815 8369.658 - 8422.297: 27.9695% ( 704) 00:09:48.815 8422.297 - 8474.937: 33.0387% ( 717) 00:09:48.815 8474.937 - 8527.576: 38.5040% ( 773) 00:09:48.815 8527.576 - 8580.215: 44.0045% ( 778) 00:09:48.815 8580.215 - 8632.855: 49.6677% ( 801) 00:09:48.815 8632.855 - 8685.494: 55.3733% ( 807) 00:09:48.815 8685.494 - 8738.133: 61.1355% ( 815) 00:09:48.815 8738.133 - 8790.773: 66.6219% ( 776) 00:09:48.815 8790.773 - 8843.412: 71.7477% ( 725) 00:09:48.815 8843.412 - 8896.051: 76.3009% ( 644) 00:09:48.815 8896.051 - 8948.691: 80.3026% ( 566) 00:09:48.815 8948.691 - 9001.330: 83.5054% ( 453) 00:09:48.815 9001.330 - 9053.969: 86.1001% ( 367) 00:09:48.815 9053.969 - 9106.609: 88.1717% ( 293) 00:09:48.815 9106.609 - 9159.248: 89.8614% ( 239) 00:09:48.815 9159.248 - 9211.888: 91.2684% ( 199) 00:09:48.815 9211.888 - 9264.527: 92.4137% ( 162) 00:09:48.815 9264.527 - 9317.166: 93.4177% ( 142) 00:09:48.815 9317.166 - 9369.806: 94.0823% ( 94) 00:09:48.815 9369.806 - 9422.445: 94.5631% ( 68) 00:09:48.815 9422.445 - 9475.084: 94.9378% ( 53) 00:09:48.815 9475.084 - 9527.724: 95.2135% ( 39) 00:09:48.815 9527.724 - 9580.363: 95.4468% ( 33) 00:09:48.815 9580.363 - 9633.002: 95.6024% ( 22) 00:09:48.815 9633.002 - 9685.642: 95.7579% ( 22) 00:09:48.815 9685.642 - 9738.281: 95.9135% ( 22) 00:09:48.815 9738.281 - 9790.920: 96.1044% ( 27) 00:09:48.815 9790.920 - 9843.560: 96.2882% ( 26) 00:09:48.815 9843.560 - 9896.199: 96.4437% ( 22) 00:09:48.815 9896.199 - 9948.839: 96.5427% ( 14) 00:09:48.815 9948.839 - 10001.478: 96.6275% ( 12) 00:09:48.815 10001.478 - 10054.117: 96.7053% ( 11) 00:09:48.815 10054.117 - 10106.757: 96.7619% ( 8) 00:09:48.815 10106.757 - 10159.396: 96.8326% ( 10) 00:09:48.815 10159.396 - 10212.035: 96.8821% ( 7) 00:09:48.815 10212.035 - 10264.675: 96.9386% ( 8) 00:09:48.815 10264.675 - 10317.314: 96.9740% ( 5) 00:09:48.815 10317.314 - 10369.953: 97.0093% ( 5) 00:09:48.815 10369.953 - 10422.593: 97.0447% ( 5) 00:09:48.815 10422.593 - 10475.232: 97.0942% ( 7) 00:09:48.815 10475.232 - 10527.871: 97.1437% ( 7) 00:09:48.815 10527.871 - 10580.511: 97.1932% ( 7) 00:09:48.815 10580.511 - 10633.150: 97.2285% ( 5) 00:09:48.815 10633.150 - 10685.790: 97.2709% ( 6) 00:09:48.815 10685.790 - 10738.429: 97.3133% ( 6) 00:09:48.815 10738.429 - 10791.068: 97.3416% ( 4) 00:09:48.815 10791.068 - 10843.708: 97.3840% ( 6) 00:09:48.815 10843.708 - 10896.347: 97.4335% ( 7) 00:09:48.815 10896.347 - 10948.986: 97.4689% ( 5) 00:09:48.815 10948.986 - 11001.626: 97.5042% ( 5) 00:09:48.815 11001.626 - 11054.265: 97.5325% ( 4) 00:09:48.815 11054.265 - 11106.904: 97.5467% ( 2) 00:09:48.815 11106.904 - 11159.544: 97.5608% ( 2) 00:09:48.815 11159.544 - 11212.183: 97.5820% ( 3) 
00:09:48.815 11212.183 - 11264.822: 97.6032% ( 3) 00:09:48.815 11264.822 - 11317.462: 97.6244% ( 3) 00:09:48.815 11317.462 - 11370.101: 97.6386% ( 2) 00:09:48.815 11370.101 - 11422.741: 97.6527% ( 2) 00:09:48.815 11422.741 - 11475.380: 97.6739% ( 3) 00:09:48.815 11475.380 - 11528.019: 97.6951% ( 3) 00:09:48.815 11528.019 - 11580.659: 97.7093% ( 2) 00:09:48.815 11580.659 - 11633.298: 97.7305% ( 3) 00:09:48.815 11633.298 - 11685.937: 97.7446% ( 2) 00:09:48.815 11685.937 - 11738.577: 97.7588% ( 2) 00:09:48.815 11738.577 - 11791.216: 97.7800% ( 3) 00:09:48.815 11791.216 - 11843.855: 97.7870% ( 1) 00:09:48.815 11843.855 - 11896.495: 97.8012% ( 2) 00:09:48.815 11896.495 - 11949.134: 97.8153% ( 2) 00:09:48.815 11949.134 - 12001.773: 97.8224% ( 1) 00:09:48.815 12001.773 - 12054.413: 97.8365% ( 2) 00:09:48.815 12054.413 - 12107.052: 97.8507% ( 2) 00:09:48.815 12107.052 - 12159.692: 97.8648% ( 2) 00:09:48.815 12159.692 - 12212.331: 97.8719% ( 1) 00:09:48.815 12212.331 - 12264.970: 97.8860% ( 2) 00:09:48.815 12264.970 - 12317.610: 97.9002% ( 2) 00:09:48.815 12317.610 - 12370.249: 97.9143% ( 2) 00:09:48.815 12370.249 - 12422.888: 97.9285% ( 2) 00:09:48.815 12422.888 - 12475.528: 97.9355% ( 1) 00:09:48.815 12475.528 - 12528.167: 97.9497% ( 2) 00:09:48.815 12528.167 - 12580.806: 97.9638% ( 2) 00:09:48.815 12580.806 - 12633.446: 97.9921% ( 4) 00:09:48.815 12633.446 - 12686.085: 98.0204% ( 4) 00:09:48.815 12686.085 - 12738.724: 98.0486% ( 4) 00:09:48.815 12738.724 - 12791.364: 98.0769% ( 4) 00:09:48.815 12791.364 - 12844.003: 98.0981% ( 3) 00:09:48.815 12844.003 - 12896.643: 98.1264% ( 4) 00:09:48.815 12896.643 - 12949.282: 98.1476% ( 3) 00:09:48.815 12949.282 - 13001.921: 98.1830% ( 5) 00:09:48.815 13001.921 - 13054.561: 98.2042% ( 3) 00:09:48.815 13054.561 - 13107.200: 98.2325% ( 4) 00:09:48.815 13107.200 - 13159.839: 98.2607% ( 4) 00:09:48.815 13159.839 - 13212.479: 98.2820% ( 3) 00:09:48.815 13212.479 - 13265.118: 98.3102% ( 4) 00:09:48.815 13265.118 - 13317.757: 98.3314% ( 3) 00:09:48.815 13317.757 - 13370.397: 98.3597% ( 4) 00:09:48.815 13370.397 - 13423.036: 98.3880% ( 4) 00:09:48.815 13423.036 - 13475.676: 98.4163% ( 4) 00:09:48.815 13475.676 - 13580.954: 98.4729% ( 8) 00:09:48.815 13580.954 - 13686.233: 98.5011% ( 4) 00:09:48.815 13686.233 - 13791.512: 98.5365% ( 5) 00:09:48.815 13791.512 - 13896.790: 98.5648% ( 4) 00:09:48.815 13896.790 - 14002.069: 98.6001% ( 5) 00:09:48.815 14002.069 - 14107.348: 98.6284% ( 4) 00:09:48.815 14107.348 - 14212.627: 98.6779% ( 7) 00:09:48.815 14212.627 - 14317.905: 98.6991% ( 3) 00:09:48.815 14317.905 - 14423.184: 98.7132% ( 2) 00:09:48.815 14423.184 - 14528.463: 98.7415% ( 4) 00:09:48.815 14528.463 - 14633.741: 98.7698% ( 4) 00:09:48.815 14633.741 - 14739.020: 98.7910% ( 3) 00:09:48.815 14739.020 - 14844.299: 98.8193% ( 4) 00:09:48.815 14844.299 - 14949.578: 98.8405% ( 3) 00:09:48.815 14949.578 - 15054.856: 98.8688% ( 4) 00:09:48.815 15054.856 - 15160.135: 98.8971% ( 4) 00:09:48.815 15160.135 - 15265.414: 98.9183% ( 3) 00:09:48.815 15265.414 - 15370.692: 98.9465% ( 4) 00:09:48.815 15370.692 - 15475.971: 98.9678% ( 3) 00:09:48.815 15475.971 - 15581.250: 98.9960% ( 4) 00:09:48.815 15581.250 - 15686.529: 99.0243% ( 4) 00:09:48.815 15686.529 - 15791.807: 99.0455% ( 3) 00:09:48.815 15791.807 - 15897.086: 99.0667% ( 3) 00:09:48.815 15897.086 - 16002.365: 99.0880% ( 3) 00:09:48.815 16002.365 - 16107.643: 99.0950% ( 1) 00:09:48.815 37900.337 - 38110.895: 99.1092% ( 2) 00:09:48.815 38110.895 - 38321.452: 99.1657% ( 8) 00:09:48.815 38321.452 - 38532.010: 99.2152% ( 7) 
00:09:48.815 38532.010 - 38742.567: 99.2718% ( 8) 00:09:48.815 38742.567 - 38953.124: 99.3283% ( 8) 00:09:48.815 38953.124 - 39163.682: 99.3849% ( 8) 00:09:48.815 39163.682 - 39374.239: 99.4415% ( 8) 00:09:48.815 39374.239 - 39584.797: 99.4910% ( 7) 00:09:48.815 39584.797 - 39795.354: 99.5475% ( 8) 00:09:48.815 44006.503 - 44217.060: 99.5546% ( 1) 00:09:48.815 44217.060 - 44427.618: 99.6041% ( 7) 00:09:48.815 44427.618 - 44638.175: 99.6677% ( 9) 00:09:48.815 44638.175 - 44848.733: 99.7243% ( 8) 00:09:48.815 44848.733 - 45059.290: 99.7738% ( 7) 00:09:48.815 45059.290 - 45269.847: 99.8232% ( 7) 00:09:48.815 45269.847 - 45480.405: 99.8798% ( 8) 00:09:48.815 45480.405 - 45690.962: 99.9293% ( 7) 00:09:48.815 45690.962 - 45901.520: 99.9859% ( 8) 00:09:48.815 45901.520 - 46112.077: 100.0000% ( 2) 00:09:48.815 00:09:48.815 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:48.815 ============================================================================== 00:09:48.815 Range in us Cumulative IO count 00:09:48.815 7737.986 - 7790.625: 0.0071% ( 1) 00:09:48.815 7790.625 - 7843.264: 0.0566% ( 7) 00:09:48.815 7843.264 - 7895.904: 0.1555% ( 14) 00:09:48.815 7895.904 - 7948.543: 0.4383% ( 40) 00:09:48.815 7948.543 - 8001.182: 1.0817% ( 91) 00:09:48.815 8001.182 - 8053.822: 2.2907% ( 171) 00:09:48.815 8053.822 - 8106.461: 3.9876% ( 240) 00:09:48.815 8106.461 - 8159.100: 6.5257% ( 359) 00:09:48.815 8159.100 - 8211.740: 9.9901% ( 490) 00:09:48.815 8211.740 - 8264.379: 13.7938% ( 538) 00:09:48.815 8264.379 - 8317.018: 18.2410% ( 629) 00:09:48.815 8317.018 - 8369.658: 23.0345% ( 678) 00:09:48.815 8369.658 - 8422.297: 28.0896% ( 715) 00:09:48.815 8422.297 - 8474.937: 33.2014% ( 723) 00:09:48.815 8474.937 - 8527.576: 38.6383% ( 769) 00:09:48.815 8527.576 - 8580.215: 44.2096% ( 788) 00:09:48.815 8580.215 - 8632.855: 49.8727% ( 801) 00:09:48.815 8632.855 - 8685.494: 55.5925% ( 809) 00:09:48.815 8685.494 - 8738.133: 61.2627% ( 802) 00:09:48.815 8738.133 - 8790.773: 66.6714% ( 765) 00:09:48.815 8790.773 - 8843.412: 71.8821% ( 737) 00:09:48.815 8843.412 - 8896.051: 76.4494% ( 646) 00:09:48.815 8896.051 - 8948.691: 80.4723% ( 569) 00:09:48.815 8948.691 - 9001.330: 83.6326% ( 447) 00:09:48.815 9001.330 - 9053.969: 86.1213% ( 352) 00:09:48.815 9053.969 - 9106.609: 88.1151% ( 282) 00:09:48.815 9106.609 - 9159.248: 89.7624% ( 233) 00:09:48.816 9159.248 - 9211.888: 91.1482% ( 196) 00:09:48.816 9211.888 - 9264.527: 92.1875% ( 147) 00:09:48.816 9264.527 - 9317.166: 93.1915% ( 142) 00:09:48.816 9317.166 - 9369.806: 93.8702% ( 96) 00:09:48.816 9369.806 - 9422.445: 94.3298% ( 65) 00:09:48.816 9422.445 - 9475.084: 94.6691% ( 48) 00:09:48.816 9475.084 - 9527.724: 94.9449% ( 39) 00:09:48.816 9527.724 - 9580.363: 95.1782% ( 33) 00:09:48.816 9580.363 - 9633.002: 95.3903% ( 30) 00:09:48.816 9633.002 - 9685.642: 95.5600% ( 24) 00:09:48.816 9685.642 - 9738.281: 95.7438% ( 26) 00:09:48.816 9738.281 - 9790.920: 95.9276% ( 26) 00:09:48.816 9790.920 - 9843.560: 96.1185% ( 27) 00:09:48.816 9843.560 - 9896.199: 96.2458% ( 18) 00:09:48.816 9896.199 - 9948.839: 96.3306% ( 12) 00:09:48.816 9948.839 - 10001.478: 96.4296% ( 14) 00:09:48.816 10001.478 - 10054.117: 96.5003% ( 10) 00:09:48.816 10054.117 - 10106.757: 96.5498% ( 7) 00:09:48.816 10106.757 - 10159.396: 96.5993% ( 7) 00:09:48.816 10159.396 - 10212.035: 96.6558% ( 8) 00:09:48.816 10212.035 - 10264.675: 96.7053% ( 7) 00:09:48.816 10264.675 - 10317.314: 96.7548% ( 7) 00:09:48.816 10317.314 - 10369.953: 96.8043% ( 7) 00:09:48.816 10369.953 - 10422.593: 96.8609% ( 8) 
00:09:48.816 10422.593 - 10475.232: 96.9104% ( 7) 00:09:48.816 10475.232 - 10527.871: 96.9669% ( 8) 00:09:48.816 10527.871 - 10580.511: 97.0093% ( 6) 00:09:48.816 10580.511 - 10633.150: 97.0376% ( 4) 00:09:48.816 10633.150 - 10685.790: 97.1012% ( 9) 00:09:48.816 10685.790 - 10738.429: 97.1507% ( 7) 00:09:48.816 10738.429 - 10791.068: 97.2073% ( 8) 00:09:48.816 10791.068 - 10843.708: 97.2568% ( 7) 00:09:48.816 10843.708 - 10896.347: 97.2921% ( 5) 00:09:48.816 10896.347 - 10948.986: 97.3275% ( 5) 00:09:48.816 10948.986 - 11001.626: 97.3628% ( 5) 00:09:48.816 11001.626 - 11054.265: 97.3982% ( 5) 00:09:48.816 11054.265 - 11106.904: 97.4265% ( 4) 00:09:48.816 11106.904 - 11159.544: 97.4618% ( 5) 00:09:48.816 11159.544 - 11212.183: 97.4972% ( 5) 00:09:48.816 11212.183 - 11264.822: 97.5396% ( 6) 00:09:48.816 11264.822 - 11317.462: 97.5679% ( 4) 00:09:48.816 11317.462 - 11370.101: 97.5962% ( 4) 00:09:48.816 11370.101 - 11422.741: 97.6315% ( 5) 00:09:48.816 11422.741 - 11475.380: 97.6598% ( 4) 00:09:48.816 11475.380 - 11528.019: 97.6881% ( 4) 00:09:48.816 11528.019 - 11580.659: 97.7234% ( 5) 00:09:48.816 11580.659 - 11633.298: 97.7517% ( 4) 00:09:48.816 11633.298 - 11685.937: 97.7800% ( 4) 00:09:48.816 11685.937 - 11738.577: 97.8153% ( 5) 00:09:48.816 11738.577 - 11791.216: 97.8507% ( 5) 00:09:48.816 11791.216 - 11843.855: 97.8719% ( 3) 00:09:48.816 11843.855 - 11896.495: 97.9002% ( 4) 00:09:48.816 11896.495 - 11949.134: 97.9143% ( 2) 00:09:48.816 11949.134 - 12001.773: 97.9285% ( 2) 00:09:48.816 12001.773 - 12054.413: 97.9426% ( 2) 00:09:48.816 12054.413 - 12107.052: 97.9497% ( 1) 00:09:48.816 12107.052 - 12159.692: 97.9638% ( 2) 00:09:48.816 12159.692 - 12212.331: 97.9779% ( 2) 00:09:48.816 12212.331 - 12264.970: 97.9921% ( 2) 00:09:48.816 12264.970 - 12317.610: 98.0062% ( 2) 00:09:48.816 12317.610 - 12370.249: 98.0133% ( 1) 00:09:48.816 12370.249 - 12422.888: 98.0274% ( 2) 00:09:48.816 12422.888 - 12475.528: 98.0416% ( 2) 00:09:48.816 12475.528 - 12528.167: 98.0557% ( 2) 00:09:48.816 12528.167 - 12580.806: 98.0699% ( 2) 00:09:48.816 12580.806 - 12633.446: 98.0769% ( 1) 00:09:48.816 12633.446 - 12686.085: 98.0911% ( 2) 00:09:48.816 12686.085 - 12738.724: 98.1052% ( 2) 00:09:48.816 12738.724 - 12791.364: 98.1193% ( 2) 00:09:48.816 12896.643 - 12949.282: 98.1335% ( 2) 00:09:48.816 12949.282 - 13001.921: 98.1406% ( 1) 00:09:48.816 13001.921 - 13054.561: 98.1547% ( 2) 00:09:48.816 13054.561 - 13107.200: 98.1688% ( 2) 00:09:48.816 13107.200 - 13159.839: 98.1759% ( 1) 00:09:48.816 13159.839 - 13212.479: 98.1900% ( 2) 00:09:48.816 13212.479 - 13265.118: 98.2042% ( 2) 00:09:48.816 13265.118 - 13317.757: 98.2183% ( 2) 00:09:48.816 13317.757 - 13370.397: 98.2325% ( 2) 00:09:48.816 13370.397 - 13423.036: 98.2466% ( 2) 00:09:48.816 13423.036 - 13475.676: 98.2607% ( 2) 00:09:48.816 13475.676 - 13580.954: 98.2890% ( 4) 00:09:48.816 13580.954 - 13686.233: 98.3385% ( 7) 00:09:48.816 13686.233 - 13791.512: 98.3951% ( 8) 00:09:48.816 13791.512 - 13896.790: 98.4446% ( 7) 00:09:48.816 13896.790 - 14002.069: 98.5011% ( 8) 00:09:48.816 14002.069 - 14107.348: 98.5648% ( 9) 00:09:48.816 14107.348 - 14212.627: 98.6143% ( 7) 00:09:48.816 14212.627 - 14317.905: 98.6708% ( 8) 00:09:48.816 14317.905 - 14423.184: 98.7203% ( 7) 00:09:48.816 14423.184 - 14528.463: 98.7769% ( 8) 00:09:48.816 14528.463 - 14633.741: 98.8264% ( 7) 00:09:48.816 14633.741 - 14739.020: 98.8829% ( 8) 00:09:48.816 14739.020 - 14844.299: 98.9324% ( 7) 00:09:48.816 14844.299 - 14949.578: 98.9536% ( 3) 00:09:48.816 14949.578 - 15054.856: 98.9819% ( 4) 
00:09:48.816 15054.856 - 15160.135: 99.0031% ( 3) 00:09:48.816 15160.135 - 15265.414: 99.0314% ( 4) 00:09:48.816 15265.414 - 15370.692: 99.0597% ( 4) 00:09:48.816 15370.692 - 15475.971: 99.0809% ( 3) 00:09:48.816 15475.971 - 15581.250: 99.0950% ( 2) 00:09:48.816 36005.320 - 36215.878: 99.1445% ( 7) 00:09:48.816 36215.878 - 36426.435: 99.2011% ( 8) 00:09:48.816 36426.435 - 36636.993: 99.2576% ( 8) 00:09:48.816 36636.993 - 36847.550: 99.3071% ( 7) 00:09:48.816 36847.550 - 37058.108: 99.3637% ( 8) 00:09:48.816 37058.108 - 37268.665: 99.4202% ( 8) 00:09:48.816 37268.665 - 37479.222: 99.4768% ( 8) 00:09:48.816 37479.222 - 37689.780: 99.5334% ( 8) 00:09:48.816 37689.780 - 37900.337: 99.5475% ( 2) 00:09:48.816 42111.486 - 42322.043: 99.5899% ( 6) 00:09:48.816 42322.043 - 42532.601: 99.6465% ( 8) 00:09:48.816 42532.601 - 42743.158: 99.6960% ( 7) 00:09:48.816 42743.158 - 42953.716: 99.7384% ( 6) 00:09:48.816 42953.716 - 43164.273: 99.7879% ( 7) 00:09:48.816 43164.273 - 43374.831: 99.8374% ( 7) 00:09:48.816 43374.831 - 43585.388: 99.9010% ( 9) 00:09:48.816 43585.388 - 43795.945: 99.9505% ( 7) 00:09:48.816 43795.945 - 44006.503: 100.0000% ( 7) 00:09:48.816 00:09:48.816 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:48.816 ============================================================================== 00:09:48.816 Range in us Cumulative IO count 00:09:48.816 7737.986 - 7790.625: 0.0352% ( 5) 00:09:48.816 7790.625 - 7843.264: 0.0845% ( 7) 00:09:48.816 7843.264 - 7895.904: 0.1760% ( 13) 00:09:48.816 7895.904 - 7948.543: 0.4082% ( 33) 00:09:48.816 7948.543 - 8001.182: 1.1402% ( 104) 00:09:48.816 8001.182 - 8053.822: 2.3860% ( 177) 00:09:48.816 8053.822 - 8106.461: 4.1033% ( 244) 00:09:48.816 8106.461 - 8159.100: 6.5878% ( 353) 00:09:48.816 8159.100 - 8211.740: 10.0225% ( 488) 00:09:48.816 8211.740 - 8264.379: 13.7950% ( 536) 00:09:48.816 8264.379 - 8317.018: 18.3066% ( 641) 00:09:48.816 8317.018 - 8369.658: 23.0926% ( 680) 00:09:48.816 8369.658 - 8422.297: 28.1532% ( 719) 00:09:48.816 8422.297 - 8474.937: 33.2207% ( 720) 00:09:48.816 8474.937 - 8527.576: 38.5346% ( 755) 00:09:48.816 8527.576 - 8580.215: 44.0667% ( 786) 00:09:48.816 8580.215 - 8632.855: 49.7185% ( 803) 00:09:48.816 8632.855 - 8685.494: 55.3702% ( 803) 00:09:48.816 8685.494 - 8738.133: 61.0572% ( 808) 00:09:48.816 8738.133 - 8790.773: 66.5048% ( 774) 00:09:48.816 8790.773 - 8843.412: 71.4879% ( 708) 00:09:48.816 8843.412 - 8896.051: 76.1613% ( 664) 00:09:48.816 8896.051 - 8948.691: 80.2998% ( 588) 00:09:48.816 8948.691 - 9001.330: 83.5797% ( 466) 00:09:48.816 9001.330 - 9053.969: 86.1205% ( 361) 00:09:48.816 9053.969 - 9106.609: 88.1334% ( 286) 00:09:48.816 9106.609 - 9159.248: 89.6396% ( 214) 00:09:48.816 9159.248 - 9211.888: 90.8432% ( 171) 00:09:48.816 9211.888 - 9264.527: 91.8637% ( 145) 00:09:48.816 9264.527 - 9317.166: 92.7224% ( 122) 00:09:48.816 9317.166 - 9369.806: 93.4262% ( 100) 00:09:48.816 9369.806 - 9422.445: 93.9189% ( 70) 00:09:48.816 9422.445 - 9475.084: 94.2849% ( 52) 00:09:48.816 9475.084 - 9527.724: 94.5876% ( 43) 00:09:48.816 9527.724 - 9580.363: 94.8409% ( 36) 00:09:48.816 9580.363 - 9633.002: 95.0732% ( 33) 00:09:48.816 9633.002 - 9685.642: 95.2703% ( 28) 00:09:48.816 9685.642 - 9738.281: 95.4673% ( 28) 00:09:48.816 9738.281 - 9790.920: 95.6222% ( 22) 00:09:48.817 9790.920 - 9843.560: 95.7630% ( 20) 00:09:48.817 9843.560 - 9896.199: 95.8896% ( 18) 00:09:48.817 9896.199 - 9948.839: 96.0093% ( 17) 00:09:48.817 9948.839 - 10001.478: 96.0938% ( 12) 00:09:48.817 10001.478 - 10054.117: 96.1852% ( 13) 
00:09:48.817 10054.117 - 10106.757: 96.2979% ( 16) 00:09:48.817 10106.757 - 10159.396: 96.3682% ( 10) 00:09:48.817 10159.396 - 10212.035: 96.4245% ( 8) 00:09:48.817 10212.035 - 10264.675: 96.4949% ( 10) 00:09:48.817 10264.675 - 10317.314: 96.5512% ( 8) 00:09:48.817 10317.314 - 10369.953: 96.5864% ( 5) 00:09:48.817 10369.953 - 10422.593: 96.6357% ( 7) 00:09:48.817 10422.593 - 10475.232: 96.6850% ( 7) 00:09:48.817 10475.232 - 10527.871: 96.7413% ( 8) 00:09:48.817 10527.871 - 10580.511: 96.7835% ( 6) 00:09:48.817 10580.511 - 10633.150: 96.8398% ( 8) 00:09:48.817 10633.150 - 10685.790: 96.8961% ( 8) 00:09:48.817 10685.790 - 10738.429: 96.9383% ( 6) 00:09:48.817 10738.429 - 10791.068: 96.9665% ( 4) 00:09:48.817 10791.068 - 10843.708: 97.0017% ( 5) 00:09:48.817 10843.708 - 10896.347: 97.0580% ( 8) 00:09:48.817 10896.347 - 10948.986: 97.1002% ( 6) 00:09:48.817 10948.986 - 11001.626: 97.1706% ( 10) 00:09:48.817 11001.626 - 11054.265: 97.2269% ( 8) 00:09:48.817 11054.265 - 11106.904: 97.2762% ( 7) 00:09:48.817 11106.904 - 11159.544: 97.3114% ( 5) 00:09:48.817 11159.544 - 11212.183: 97.3677% ( 8) 00:09:48.817 11212.183 - 11264.822: 97.4099% ( 6) 00:09:48.817 11264.822 - 11317.462: 97.4451% ( 5) 00:09:48.817 11317.462 - 11370.101: 97.5014% ( 8) 00:09:48.817 11370.101 - 11422.741: 97.5366% ( 5) 00:09:48.817 11422.741 - 11475.380: 97.5859% ( 7) 00:09:48.817 11475.380 - 11528.019: 97.6281% ( 6) 00:09:48.817 11528.019 - 11580.659: 97.6774% ( 7) 00:09:48.817 11580.659 - 11633.298: 97.7126% ( 5) 00:09:48.817 11633.298 - 11685.937: 97.7618% ( 7) 00:09:48.817 11685.937 - 11738.577: 97.8041% ( 6) 00:09:48.817 11738.577 - 11791.216: 97.8533% ( 7) 00:09:48.817 11791.216 - 11843.855: 97.9026% ( 7) 00:09:48.817 11843.855 - 11896.495: 97.9307% ( 4) 00:09:48.817 11896.495 - 11949.134: 97.9589% ( 4) 00:09:48.817 11949.134 - 12001.773: 97.9870% ( 4) 00:09:48.817 12001.773 - 12054.413: 98.0222% ( 5) 00:09:48.817 12054.413 - 12107.052: 98.0504% ( 4) 00:09:48.817 12107.052 - 12159.692: 98.0574% ( 1) 00:09:48.817 12159.692 - 12212.331: 98.0715% ( 2) 00:09:48.817 12212.331 - 12264.970: 98.0856% ( 2) 00:09:48.817 12264.970 - 12317.610: 98.0997% ( 2) 00:09:48.817 12317.610 - 12370.249: 98.1137% ( 2) 00:09:48.817 12370.249 - 12422.888: 98.1208% ( 1) 00:09:48.817 12422.888 - 12475.528: 98.1349% ( 2) 00:09:48.817 12475.528 - 12528.167: 98.1489% ( 2) 00:09:48.817 12528.167 - 12580.806: 98.1630% ( 2) 00:09:48.817 12580.806 - 12633.446: 98.1700% ( 1) 00:09:48.817 12633.446 - 12686.085: 98.1841% ( 2) 00:09:48.817 12686.085 - 12738.724: 98.1982% ( 2) 00:09:48.817 13159.839 - 13212.479: 98.2123% ( 2) 00:09:48.817 13212.479 - 13265.118: 98.2264% ( 2) 00:09:48.817 13265.118 - 13317.757: 98.2404% ( 2) 00:09:48.817 13317.757 - 13370.397: 98.2545% ( 2) 00:09:48.817 13370.397 - 13423.036: 98.2615% ( 1) 00:09:48.817 13423.036 - 13475.676: 98.2756% ( 2) 00:09:48.817 13475.676 - 13580.954: 98.3038% ( 4) 00:09:48.817 13580.954 - 13686.233: 98.3249% ( 3) 00:09:48.817 13686.233 - 13791.512: 98.3460% ( 3) 00:09:48.817 13791.512 - 13896.790: 98.4023% ( 8) 00:09:48.817 13896.790 - 14002.069: 98.4516% ( 7) 00:09:48.817 14002.069 - 14107.348: 98.5008% ( 7) 00:09:48.817 14107.348 - 14212.627: 98.5572% ( 8) 00:09:48.817 14212.627 - 14317.905: 98.6135% ( 8) 00:09:48.817 14317.905 - 14423.184: 98.6627% ( 7) 00:09:48.817 14423.184 - 14528.463: 98.7120% ( 7) 00:09:48.817 14528.463 - 14633.741: 98.7683% ( 8) 00:09:48.817 14633.741 - 14739.020: 98.8246% ( 8) 00:09:48.817 14739.020 - 14844.299: 98.8739% ( 7) 00:09:48.817 14844.299 - 14949.578: 98.9231% ( 7) 
00:09:48.817 14949.578 - 15054.856: 98.9794% ( 8) 00:09:48.817 15054.856 - 15160.135: 99.0217% ( 6) 00:09:48.817 15160.135 - 15265.414: 99.0498% ( 4) 00:09:48.817 15265.414 - 15370.692: 99.0850% ( 5) 00:09:48.817 15370.692 - 15475.971: 99.0991% ( 2) 00:09:48.817 28846.368 - 29056.925: 99.1061% ( 1) 00:09:48.817 29056.925 - 29267.483: 99.1554% ( 7) 00:09:48.817 29267.483 - 29478.040: 99.2117% ( 8) 00:09:48.817 29478.040 - 29688.598: 99.2751% ( 9) 00:09:48.817 29688.598 - 29899.155: 99.3243% ( 7) 00:09:48.817 29899.155 - 30109.712: 99.3806% ( 8) 00:09:48.817 30109.712 - 30320.270: 99.4440% ( 9) 00:09:48.817 30320.270 - 30530.827: 99.4932% ( 7) 00:09:48.817 30530.827 - 30741.385: 99.5495% ( 8) 00:09:48.817 35373.648 - 35584.206: 99.5918% ( 6) 00:09:48.817 35584.206 - 35794.763: 99.6481% ( 8) 00:09:48.817 35794.763 - 36005.320: 99.7044% ( 8) 00:09:48.817 36005.320 - 36215.878: 99.7607% ( 8) 00:09:48.817 36215.878 - 36426.435: 99.8170% ( 8) 00:09:48.817 36426.435 - 36636.993: 99.8663% ( 7) 00:09:48.817 36636.993 - 36847.550: 99.9226% ( 8) 00:09:48.817 36847.550 - 37058.108: 99.9718% ( 7) 00:09:48.817 37058.108 - 37268.665: 100.0000% ( 4) 00:09:48.817 00:09:48.817 11:53:38 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:50.199 Initializing NVMe Controllers 00:09:50.199 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:50.199 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:50.199 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:50.199 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:50.199 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:50.199 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:50.199 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:50.199 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:50.199 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:50.199 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:50.199 Initialization complete. Launching workers. 
00:09:50.199 ======================================================== 00:09:50.199 Latency(us) 00:09:50.199 Device Information : IOPS MiB/s Average min max 00:09:50.199 PCIE (0000:00:10.0) NSID 1 from core 0: 10843.75 127.08 11832.06 8173.29 41661.31 00:09:50.199 PCIE (0000:00:11.0) NSID 1 from core 0: 10843.75 127.08 11813.89 8506.14 39926.58 00:09:50.199 PCIE (0000:00:13.0) NSID 1 from core 0: 10843.75 127.08 11795.07 8452.35 38968.96 00:09:50.199 PCIE (0000:00:12.0) NSID 1 from core 0: 10843.75 127.08 11777.44 8456.91 37246.14 00:09:50.199 PCIE (0000:00:12.0) NSID 2 from core 0: 10843.75 127.08 11759.82 8578.09 35570.51 00:09:50.199 PCIE (0000:00:12.0) NSID 3 from core 0: 10907.54 127.82 11673.36 8382.58 27615.90 00:09:50.199 ======================================================== 00:09:50.199 Total : 65126.28 763.20 11775.17 8173.29 41661.31 00:09:50.199 00:09:50.199 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:50.199 ================================================================================= 00:09:50.199 1.00000% : 8738.133us 00:09:50.199 10.00000% : 9369.806us 00:09:50.199 25.00000% : 9738.281us 00:09:50.199 50.00000% : 10527.871us 00:09:50.199 75.00000% : 13265.118us 00:09:50.199 90.00000% : 15686.529us 00:09:50.199 95.00000% : 17370.988us 00:09:50.199 98.00000% : 18634.333us 00:09:50.199 99.00000% : 32636.402us 00:09:50.199 99.50000% : 40216.469us 00:09:50.199 99.90000% : 41479.814us 00:09:50.199 99.99000% : 41690.371us 00:09:50.199 99.99900% : 41690.371us 00:09:50.199 99.99990% : 41690.371us 00:09:50.199 99.99999% : 41690.371us 00:09:50.199 00:09:50.199 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:50.199 ================================================================================= 00:09:50.199 1.00000% : 8896.051us 00:09:50.199 10.00000% : 9369.806us 00:09:50.199 25.00000% : 9738.281us 00:09:50.199 50.00000% : 10527.871us 00:09:50.199 75.00000% : 13212.479us 00:09:50.199 90.00000% : 15791.807us 00:09:50.199 95.00000% : 17370.988us 00:09:50.199 98.00000% : 18950.169us 00:09:50.199 99.00000% : 30951.942us 00:09:50.199 99.50000% : 38532.010us 00:09:50.199 99.90000% : 39795.354us 00:09:50.199 99.99000% : 40005.912us 00:09:50.199 99.99900% : 40005.912us 00:09:50.199 99.99990% : 40005.912us 00:09:50.199 99.99999% : 40005.912us 00:09:50.199 00:09:50.199 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:50.199 ================================================================================= 00:09:50.199 1.00000% : 8896.051us 00:09:50.199 10.00000% : 9369.806us 00:09:50.199 25.00000% : 9738.281us 00:09:50.199 50.00000% : 10527.871us 00:09:50.199 75.00000% : 13054.561us 00:09:50.199 90.00000% : 15897.086us 00:09:50.199 95.00000% : 17476.267us 00:09:50.199 98.00000% : 18844.890us 00:09:50.199 99.00000% : 30109.712us 00:09:50.199 99.50000% : 37689.780us 00:09:50.199 99.90000% : 38742.567us 00:09:50.199 99.99000% : 38953.124us 00:09:50.199 99.99900% : 39163.682us 00:09:50.199 99.99990% : 39163.682us 00:09:50.199 99.99999% : 39163.682us 00:09:50.199 00:09:50.199 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:50.199 ================================================================================= 00:09:50.199 1.00000% : 8843.412us 00:09:50.199 10.00000% : 9369.806us 00:09:50.199 25.00000% : 9738.281us 00:09:50.199 50.00000% : 10527.871us 00:09:50.199 75.00000% : 12896.643us 00:09:50.199 90.00000% : 15897.086us 00:09:50.199 95.00000% : 17370.988us 00:09:50.199 98.00000% : 18739.611us 
00:09:50.199 99.00000% : 28425.253us 00:09:50.199 99.50000% : 35794.763us 00:09:50.199 99.90000% : 37058.108us 00:09:50.199 99.99000% : 37268.665us 00:09:50.199 99.99900% : 37268.665us 00:09:50.199 99.99990% : 37268.665us 00:09:50.199 99.99999% : 37268.665us 00:09:50.199 00:09:50.199 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:50.199 ================================================================================= 00:09:50.199 1.00000% : 8948.691us 00:09:50.199 10.00000% : 9317.166us 00:09:50.199 25.00000% : 9738.281us 00:09:50.199 50.00000% : 10527.871us 00:09:50.199 75.00000% : 13054.561us 00:09:50.199 90.00000% : 16002.365us 00:09:50.199 95.00000% : 17160.431us 00:09:50.199 98.00000% : 18107.939us 00:09:50.199 99.00000% : 27161.908us 00:09:50.199 99.50000% : 34110.304us 00:09:50.199 99.90000% : 35373.648us 00:09:50.199 99.99000% : 35584.206us 00:09:50.199 99.99900% : 35584.206us 00:09:50.199 99.99990% : 35584.206us 00:09:50.199 99.99999% : 35584.206us 00:09:50.199 00:09:50.199 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:50.199 ================================================================================= 00:09:50.199 1.00000% : 8896.051us 00:09:50.199 10.00000% : 9369.806us 00:09:50.199 25.00000% : 9790.920us 00:09:50.199 50.00000% : 10527.871us 00:09:50.199 75.00000% : 13107.200us 00:09:50.199 90.00000% : 15686.529us 00:09:50.199 95.00000% : 17370.988us 00:09:50.199 98.00000% : 18318.496us 00:09:50.199 99.00000% : 19160.726us 00:09:50.199 99.50000% : 26319.679us 00:09:50.199 99.90000% : 27372.466us 00:09:50.199 99.99000% : 27793.581us 00:09:50.199 99.99900% : 27793.581us 00:09:50.199 99.99990% : 27793.581us 00:09:50.199 99.99999% : 27793.581us 00:09:50.199 00:09:50.199 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:50.199 ============================================================================== 00:09:50.199 Range in us Cumulative IO count 00:09:50.199 8159.100 - 8211.740: 0.0092% ( 1) 00:09:50.199 8369.658 - 8422.297: 0.1103% ( 11) 00:09:50.199 8422.297 - 8474.937: 0.2206% ( 12) 00:09:50.199 8474.937 - 8527.576: 0.3217% ( 11) 00:09:50.199 8527.576 - 8580.215: 0.3768% ( 6) 00:09:50.199 8580.215 - 8632.855: 0.4320% ( 6) 00:09:50.199 8632.855 - 8685.494: 0.8180% ( 42) 00:09:50.199 8685.494 - 8738.133: 1.2316% ( 45) 00:09:50.199 8738.133 - 8790.773: 1.5717% ( 37) 00:09:50.199 8790.773 - 8843.412: 1.8658% ( 32) 00:09:50.199 8843.412 - 8896.051: 2.3897% ( 57) 00:09:50.199 8896.051 - 8948.691: 3.0147% ( 68) 00:09:50.199 8948.691 - 9001.330: 3.4375% ( 46) 00:09:50.199 9001.330 - 9053.969: 4.1452% ( 77) 00:09:50.199 9053.969 - 9106.609: 5.0276% ( 96) 00:09:50.199 9106.609 - 9159.248: 5.8180% ( 86) 00:09:50.199 9159.248 - 9211.888: 7.0037% ( 129) 00:09:50.199 9211.888 - 9264.527: 8.4099% ( 153) 00:09:50.199 9264.527 - 9317.166: 9.9173% ( 164) 00:09:50.199 9317.166 - 9369.806: 11.5717% ( 180) 00:09:50.199 9369.806 - 9422.445: 13.4191% ( 201) 00:09:50.199 9422.445 - 9475.084: 15.1930% ( 193) 00:09:50.199 9475.084 - 9527.724: 16.7647% ( 171) 00:09:50.199 9527.724 - 9580.363: 18.7040% ( 211) 00:09:50.199 9580.363 - 9633.002: 20.8456% ( 233) 00:09:50.199 9633.002 - 9685.642: 23.0790% ( 243) 00:09:50.199 9685.642 - 9738.281: 25.0551% ( 215) 00:09:50.200 9738.281 - 9790.920: 27.3346% ( 248) 00:09:50.200 9790.920 - 9843.560: 29.6691% ( 254) 00:09:50.200 9843.560 - 9896.199: 31.6544% ( 216) 00:09:50.200 9896.199 - 9948.839: 33.8971% ( 244) 00:09:50.200 9948.839 - 10001.478: 35.8732% ( 215) 00:09:50.200 10001.478 - 
10054.117: 37.4724% ( 174) 00:09:50.200 10054.117 - 10106.757: 38.8879% ( 154) 00:09:50.200 10106.757 - 10159.396: 40.2941% ( 153) 00:09:50.200 10159.396 - 10212.035: 41.6452% ( 147) 00:09:50.200 10212.035 - 10264.675: 43.2904% ( 179) 00:09:50.200 10264.675 - 10317.314: 44.9081% ( 176) 00:09:50.200 10317.314 - 10369.953: 46.5257% ( 176) 00:09:50.200 10369.953 - 10422.593: 47.9044% ( 150) 00:09:50.200 10422.593 - 10475.232: 49.5864% ( 183) 00:09:50.200 10475.232 - 10527.871: 50.8640% ( 139) 00:09:50.200 10527.871 - 10580.511: 51.8382% ( 106) 00:09:50.200 10580.511 - 10633.150: 52.7757% ( 102) 00:09:50.200 10633.150 - 10685.790: 53.9798% ( 131) 00:09:50.200 10685.790 - 10738.429: 55.1930% ( 132) 00:09:50.200 10738.429 - 10791.068: 56.2776% ( 118) 00:09:50.200 10791.068 - 10843.708: 57.4173% ( 124) 00:09:50.200 10843.708 - 10896.347: 58.0882% ( 73) 00:09:50.200 10896.347 - 10948.986: 58.6673% ( 63) 00:09:50.200 10948.986 - 11001.626: 59.4945% ( 90) 00:09:50.200 11001.626 - 11054.265: 60.2114% ( 78) 00:09:50.200 11054.265 - 11106.904: 60.9007% ( 75) 00:09:50.200 11106.904 - 11159.544: 61.4430% ( 59) 00:09:50.200 11159.544 - 11212.183: 61.9026% ( 50) 00:09:50.200 11212.183 - 11264.822: 62.3713% ( 51) 00:09:50.200 11264.822 - 11317.462: 62.8493% ( 52) 00:09:50.200 11317.462 - 11370.101: 63.4099% ( 61) 00:09:50.200 11370.101 - 11422.741: 63.8051% ( 43) 00:09:50.200 11422.741 - 11475.380: 64.3658% ( 61) 00:09:50.200 11475.380 - 11528.019: 64.8070% ( 48) 00:09:50.200 11528.019 - 11580.659: 65.1103% ( 33) 00:09:50.200 11580.659 - 11633.298: 65.4779% ( 40) 00:09:50.200 11633.298 - 11685.937: 65.8180% ( 37) 00:09:50.200 11685.937 - 11738.577: 66.0754% ( 28) 00:09:50.200 11738.577 - 11791.216: 66.3695% ( 32) 00:09:50.200 11791.216 - 11843.855: 66.7096% ( 37) 00:09:50.200 11843.855 - 11896.495: 67.1140% ( 44) 00:09:50.200 11896.495 - 11949.134: 67.5643% ( 49) 00:09:50.200 11949.134 - 12001.773: 67.9228% ( 39) 00:09:50.200 12001.773 - 12054.413: 68.5662% ( 70) 00:09:50.200 12054.413 - 12107.052: 68.9522% ( 42) 00:09:50.200 12107.052 - 12159.692: 69.4118% ( 50) 00:09:50.200 12159.692 - 12212.331: 69.7518% ( 37) 00:09:50.200 12212.331 - 12264.970: 70.0276% ( 30) 00:09:50.200 12264.970 - 12317.610: 70.3585% ( 36) 00:09:50.200 12317.610 - 12370.249: 70.5790% ( 24) 00:09:50.200 12370.249 - 12422.888: 70.7812% ( 22) 00:09:50.200 12422.888 - 12475.528: 71.0294% ( 27) 00:09:50.200 12475.528 - 12528.167: 71.2960% ( 29) 00:09:50.200 12528.167 - 12580.806: 71.6452% ( 38) 00:09:50.200 12580.806 - 12633.446: 71.9393% ( 32) 00:09:50.200 12633.446 - 12686.085: 72.3529% ( 45) 00:09:50.200 12686.085 - 12738.724: 72.7574% ( 44) 00:09:50.200 12738.724 - 12791.364: 72.9963% ( 26) 00:09:50.200 12791.364 - 12844.003: 73.1801% ( 20) 00:09:50.200 12844.003 - 12896.643: 73.3364% ( 17) 00:09:50.200 12896.643 - 12949.282: 73.6489% ( 34) 00:09:50.200 12949.282 - 13001.921: 73.9614% ( 34) 00:09:50.200 13001.921 - 13054.561: 74.1636% ( 22) 00:09:50.200 13054.561 - 13107.200: 74.3658% ( 22) 00:09:50.200 13107.200 - 13159.839: 74.6507% ( 31) 00:09:50.200 13159.839 - 13212.479: 74.8897% ( 26) 00:09:50.200 13212.479 - 13265.118: 75.3860% ( 54) 00:09:50.200 13265.118 - 13317.757: 75.8640% ( 52) 00:09:50.200 13317.757 - 13370.397: 76.3235% ( 50) 00:09:50.200 13370.397 - 13423.036: 76.6728% ( 38) 00:09:50.200 13423.036 - 13475.676: 77.1140% ( 48) 00:09:50.200 13475.676 - 13580.954: 77.5643% ( 49) 00:09:50.200 13580.954 - 13686.233: 78.3456% ( 85) 00:09:50.200 13686.233 - 13791.512: 79.2279% ( 96) 00:09:50.200 13791.512 - 13896.790: 80.2114% 
( 107) 00:09:50.200 13896.790 - 14002.069: 80.7904% ( 63) 00:09:50.200 14002.069 - 14107.348: 81.3511% ( 61) 00:09:50.200 14107.348 - 14212.627: 81.8842% ( 58) 00:09:50.200 14212.627 - 14317.905: 82.4632% ( 63) 00:09:50.200 14317.905 - 14423.184: 83.1710% ( 77) 00:09:50.200 14423.184 - 14528.463: 84.0257% ( 93) 00:09:50.200 14528.463 - 14633.741: 84.5404% ( 56) 00:09:50.200 14633.741 - 14739.020: 85.0368% ( 54) 00:09:50.200 14739.020 - 14844.299: 85.6250% ( 64) 00:09:50.200 14844.299 - 14949.578: 86.2500% ( 68) 00:09:50.200 14949.578 - 15054.856: 86.8107% ( 61) 00:09:50.200 15054.856 - 15160.135: 87.4724% ( 72) 00:09:50.200 15160.135 - 15265.414: 88.0974% ( 68) 00:09:50.200 15265.414 - 15370.692: 88.7592% ( 72) 00:09:50.200 15370.692 - 15475.971: 89.2188% ( 50) 00:09:50.200 15475.971 - 15581.250: 89.7151% ( 54) 00:09:50.200 15581.250 - 15686.529: 90.1654% ( 49) 00:09:50.200 15686.529 - 15791.807: 90.4871% ( 35) 00:09:50.200 15791.807 - 15897.086: 90.8088% ( 35) 00:09:50.200 15897.086 - 16002.365: 90.9926% ( 20) 00:09:50.200 16002.365 - 16107.643: 91.1581% ( 18) 00:09:50.200 16107.643 - 16212.922: 91.4430% ( 31) 00:09:50.200 16212.922 - 16318.201: 91.6912% ( 27) 00:09:50.200 16318.201 - 16423.480: 91.9853% ( 32) 00:09:50.200 16423.480 - 16528.758: 92.3254% ( 37) 00:09:50.200 16528.758 - 16634.037: 92.6287% ( 33) 00:09:50.200 16634.037 - 16739.316: 92.9320% ( 33) 00:09:50.200 16739.316 - 16844.594: 93.2261% ( 32) 00:09:50.200 16844.594 - 16949.873: 93.6121% ( 42) 00:09:50.200 16949.873 - 17055.152: 93.9154% ( 33) 00:09:50.200 17055.152 - 17160.431: 94.2188% ( 33) 00:09:50.200 17160.431 - 17265.709: 94.5772% ( 39) 00:09:50.200 17265.709 - 17370.988: 95.0092% ( 47) 00:09:50.200 17370.988 - 17476.267: 95.4136% ( 44) 00:09:50.200 17476.267 - 17581.545: 95.7537% ( 37) 00:09:50.200 17581.545 - 17686.824: 96.0018% ( 27) 00:09:50.200 17686.824 - 17792.103: 96.3051% ( 33) 00:09:50.200 17792.103 - 17897.382: 96.5901% ( 31) 00:09:50.200 17897.382 - 18002.660: 96.9761% ( 42) 00:09:50.200 18002.660 - 18107.939: 97.2518% ( 30) 00:09:50.200 18107.939 - 18213.218: 97.4724% ( 24) 00:09:50.200 18213.218 - 18318.496: 97.7114% ( 26) 00:09:50.200 18318.496 - 18423.775: 97.8125% ( 11) 00:09:50.200 18423.775 - 18529.054: 97.9963% ( 20) 00:09:50.200 18529.054 - 18634.333: 98.1066% ( 12) 00:09:50.200 18634.333 - 18739.611: 98.1342% ( 3) 00:09:50.200 18739.611 - 18844.890: 98.1618% ( 3) 00:09:50.200 18844.890 - 18950.169: 98.3364% ( 19) 00:09:50.200 18950.169 - 19055.447: 98.4743% ( 15) 00:09:50.200 19055.447 - 19160.726: 98.5846% ( 12) 00:09:50.200 19160.726 - 19266.005: 98.6673% ( 9) 00:09:50.200 19266.005 - 19371.284: 98.7040% ( 4) 00:09:50.200 19371.284 - 19476.562: 98.7132% ( 1) 00:09:50.200 19476.562 - 19581.841: 98.7500% ( 4) 00:09:50.200 19581.841 - 19687.120: 98.7868% ( 4) 00:09:50.200 19687.120 - 19792.398: 98.8235% ( 4) 00:09:50.200 32215.287 - 32425.844: 98.9614% ( 15) 00:09:50.200 32425.844 - 32636.402: 99.0809% ( 13) 00:09:50.200 32636.402 - 32846.959: 99.1176% ( 4) 00:09:50.200 32846.959 - 33057.516: 99.1636% ( 5) 00:09:50.200 33057.516 - 33268.074: 99.2188% ( 6) 00:09:50.200 33268.074 - 33478.631: 99.2831% ( 7) 00:09:50.200 33478.631 - 33689.189: 99.3474% ( 7) 00:09:50.200 33689.189 - 33899.746: 99.4026% ( 6) 00:09:50.200 33899.746 - 34110.304: 99.4118% ( 1) 00:09:50.200 39584.797 - 39795.354: 99.4301% ( 2) 00:09:50.200 39795.354 - 40005.912: 99.4945% ( 7) 00:09:50.200 40005.912 - 40216.469: 99.5588% ( 7) 00:09:50.200 40216.469 - 40427.027: 99.6140% ( 6) 00:09:50.200 40427.027 - 40637.584: 99.6967% ( 
9) 00:09:50.200 40637.584 - 40848.141: 99.7426% ( 5) 00:09:50.200 40848.141 - 41058.699: 99.8070% ( 7) 00:09:50.200 41058.699 - 41269.256: 99.8713% ( 7) 00:09:50.200 41269.256 - 41479.814: 99.9540% ( 9) 00:09:50.200 41479.814 - 41690.371: 100.0000% ( 5) 00:09:50.200 00:09:50.200 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:50.200 ============================================================================== 00:09:50.200 Range in us Cumulative IO count 00:09:50.200 8474.937 - 8527.576: 0.0368% ( 4) 00:09:50.200 8527.576 - 8580.215: 0.0827% ( 5) 00:09:50.200 8580.215 - 8632.855: 0.1287% ( 5) 00:09:50.200 8632.855 - 8685.494: 0.2206% ( 10) 00:09:50.200 8685.494 - 8738.133: 0.3952% ( 19) 00:09:50.200 8738.133 - 8790.773: 0.5515% ( 17) 00:09:50.200 8790.773 - 8843.412: 0.8088% ( 28) 00:09:50.200 8843.412 - 8896.051: 1.1949% ( 42) 00:09:50.200 8896.051 - 8948.691: 1.8015% ( 66) 00:09:50.200 8948.691 - 9001.330: 2.6654% ( 94) 00:09:50.200 9001.330 - 9053.969: 3.4467% ( 85) 00:09:50.200 9053.969 - 9106.609: 4.4669% ( 111) 00:09:50.200 9106.609 - 9159.248: 5.5607% ( 119) 00:09:50.200 9159.248 - 9211.888: 6.7831% ( 133) 00:09:50.200 9211.888 - 9264.527: 7.8860% ( 120) 00:09:50.200 9264.527 - 9317.166: 9.1912% ( 142) 00:09:50.200 9317.166 - 9369.806: 10.6893% ( 163) 00:09:50.200 9369.806 - 9422.445: 12.7574% ( 225) 00:09:50.200 9422.445 - 9475.084: 14.6140% ( 202) 00:09:50.200 9475.084 - 9527.724: 16.5809% ( 214) 00:09:50.200 9527.724 - 9580.363: 18.6213% ( 222) 00:09:50.200 9580.363 - 9633.002: 20.8456% ( 242) 00:09:50.200 9633.002 - 9685.642: 23.2445% ( 261) 00:09:50.200 9685.642 - 9738.281: 25.5055% ( 246) 00:09:50.200 9738.281 - 9790.920: 28.0331% ( 275) 00:09:50.200 9790.920 - 9843.560: 30.3860% ( 256) 00:09:50.200 9843.560 - 9896.199: 32.7849% ( 261) 00:09:50.200 9896.199 - 9948.839: 34.9449% ( 235) 00:09:50.200 9948.839 - 10001.478: 36.6636% ( 187) 00:09:50.200 10001.478 - 10054.117: 38.3824% ( 187) 00:09:50.200 10054.117 - 10106.757: 39.8805% ( 163) 00:09:50.200 10106.757 - 10159.396: 41.1673% ( 140) 00:09:50.200 10159.396 - 10212.035: 42.3989% ( 134) 00:09:50.200 10212.035 - 10264.675: 43.7040% ( 142) 00:09:50.200 10264.675 - 10317.314: 44.9173% ( 132) 00:09:50.201 10317.314 - 10369.953: 46.1305% ( 132) 00:09:50.201 10369.953 - 10422.593: 47.6287% ( 163) 00:09:50.201 10422.593 - 10475.232: 49.3015% ( 182) 00:09:50.201 10475.232 - 10527.871: 50.6893% ( 151) 00:09:50.201 10527.871 - 10580.511: 52.2335% ( 168) 00:09:50.201 10580.511 - 10633.150: 53.5938% ( 148) 00:09:50.201 10633.150 - 10685.790: 54.9173% ( 144) 00:09:50.201 10685.790 - 10738.429: 56.0478% ( 123) 00:09:50.201 10738.429 - 10791.068: 57.0956% ( 114) 00:09:50.201 10791.068 - 10843.708: 58.1434% ( 114) 00:09:50.201 10843.708 - 10896.347: 59.1360% ( 108) 00:09:50.201 10896.347 - 10948.986: 59.9816% ( 92) 00:09:50.201 10948.986 - 11001.626: 60.6434% ( 72) 00:09:50.201 11001.626 - 11054.265: 61.3971% ( 82) 00:09:50.201 11054.265 - 11106.904: 61.9669% ( 62) 00:09:50.201 11106.904 - 11159.544: 62.5827% ( 67) 00:09:50.201 11159.544 - 11212.183: 63.1158% ( 58) 00:09:50.201 11212.183 - 11264.822: 63.4835% ( 40) 00:09:50.201 11264.822 - 11317.462: 63.8603% ( 41) 00:09:50.201 11317.462 - 11370.101: 64.2923% ( 47) 00:09:50.201 11370.101 - 11422.741: 64.6048% ( 34) 00:09:50.201 11422.741 - 11475.380: 64.9265% ( 35) 00:09:50.201 11475.380 - 11528.019: 65.2298% ( 33) 00:09:50.201 11528.019 - 11580.659: 65.5423% ( 34) 00:09:50.201 11580.659 - 11633.298: 65.9099% ( 40) 00:09:50.201 11633.298 - 11685.937: 66.4522% ( 59) 
00:09:50.201 11685.937 - 11738.577: 66.9393% ( 53) 00:09:50.201 11738.577 - 11791.216: 67.2702% ( 36) 00:09:50.201 11791.216 - 11843.855: 67.5460% ( 30) 00:09:50.201 11843.855 - 11896.495: 67.9136% ( 40) 00:09:50.201 11896.495 - 11949.134: 68.2629% ( 38) 00:09:50.201 11949.134 - 12001.773: 68.5938% ( 36) 00:09:50.201 12001.773 - 12054.413: 69.2004% ( 66) 00:09:50.201 12054.413 - 12107.052: 69.6324% ( 47) 00:09:50.201 12107.052 - 12159.692: 70.1195% ( 53) 00:09:50.201 12159.692 - 12212.331: 70.5331% ( 45) 00:09:50.201 12212.331 - 12264.970: 70.9375% ( 44) 00:09:50.201 12264.970 - 12317.610: 71.3603% ( 46) 00:09:50.201 12317.610 - 12370.249: 71.6636% ( 33) 00:09:50.201 12370.249 - 12422.888: 72.0129% ( 38) 00:09:50.201 12422.888 - 12475.528: 72.3529% ( 37) 00:09:50.201 12475.528 - 12528.167: 72.5551% ( 22) 00:09:50.201 12528.167 - 12580.806: 72.7574% ( 22) 00:09:50.201 12580.806 - 12633.446: 72.9320% ( 19) 00:09:50.201 12633.446 - 12686.085: 73.1342% ( 22) 00:09:50.201 12686.085 - 12738.724: 73.2904% ( 17) 00:09:50.201 12738.724 - 12791.364: 73.5294% ( 26) 00:09:50.201 12791.364 - 12844.003: 73.6489% ( 13) 00:09:50.201 12844.003 - 12896.643: 73.7684% ( 13) 00:09:50.201 12896.643 - 12949.282: 73.9154% ( 16) 00:09:50.201 12949.282 - 13001.921: 74.0993% ( 20) 00:09:50.201 13001.921 - 13054.561: 74.3474% ( 27) 00:09:50.201 13054.561 - 13107.200: 74.6140% ( 29) 00:09:50.201 13107.200 - 13159.839: 74.9081% ( 32) 00:09:50.201 13159.839 - 13212.479: 75.1838% ( 30) 00:09:50.201 13212.479 - 13265.118: 75.3768% ( 21) 00:09:50.201 13265.118 - 13317.757: 75.5882% ( 23) 00:09:50.201 13317.757 - 13370.397: 75.8824% ( 32) 00:09:50.201 13370.397 - 13423.036: 76.2040% ( 35) 00:09:50.201 13423.036 - 13475.676: 76.6360% ( 47) 00:09:50.201 13475.676 - 13580.954: 77.4540% ( 89) 00:09:50.201 13580.954 - 13686.233: 78.3364% ( 96) 00:09:50.201 13686.233 - 13791.512: 79.0257% ( 75) 00:09:50.201 13791.512 - 13896.790: 79.7426% ( 78) 00:09:50.201 13896.790 - 14002.069: 80.4596% ( 78) 00:09:50.201 14002.069 - 14107.348: 81.1305% ( 73) 00:09:50.201 14107.348 - 14212.627: 81.7463% ( 67) 00:09:50.201 14212.627 - 14317.905: 82.2702% ( 57) 00:09:50.201 14317.905 - 14423.184: 82.6746% ( 44) 00:09:50.201 14423.184 - 14528.463: 83.0790% ( 44) 00:09:50.201 14528.463 - 14633.741: 83.6029% ( 57) 00:09:50.201 14633.741 - 14739.020: 84.2739% ( 73) 00:09:50.201 14739.020 - 14844.299: 84.7518% ( 52) 00:09:50.201 14844.299 - 14949.578: 85.2849% ( 58) 00:09:50.201 14949.578 - 15054.856: 85.7812% ( 54) 00:09:50.201 15054.856 - 15160.135: 86.2868% ( 55) 00:09:50.201 15160.135 - 15265.414: 86.8382% ( 60) 00:09:50.201 15265.414 - 15370.692: 87.4724% ( 69) 00:09:50.201 15370.692 - 15475.971: 88.0699% ( 65) 00:09:50.201 15475.971 - 15581.250: 88.7684% ( 76) 00:09:50.201 15581.250 - 15686.529: 89.3934% ( 68) 00:09:50.201 15686.529 - 15791.807: 90.0184% ( 68) 00:09:50.201 15791.807 - 15897.086: 90.8272% ( 88) 00:09:50.201 15897.086 - 16002.365: 91.2316% ( 44) 00:09:50.201 16002.365 - 16107.643: 91.6728% ( 48) 00:09:50.201 16107.643 - 16212.922: 92.0129% ( 37) 00:09:50.201 16212.922 - 16318.201: 92.3529% ( 37) 00:09:50.201 16318.201 - 16423.480: 92.6103% ( 28) 00:09:50.201 16423.480 - 16528.758: 92.9044% ( 32) 00:09:50.201 16528.758 - 16634.037: 93.1618% ( 28) 00:09:50.201 16634.037 - 16739.316: 93.4283% ( 29) 00:09:50.201 16739.316 - 16844.594: 93.6581% ( 25) 00:09:50.201 16844.594 - 16949.873: 93.9154% ( 28) 00:09:50.201 16949.873 - 17055.152: 94.2555% ( 37) 00:09:50.201 17055.152 - 17160.431: 94.5312% ( 30) 00:09:50.201 17160.431 - 17265.709: 
94.7886% ( 28) 00:09:50.201 17265.709 - 17370.988: 95.1011% ( 34) 00:09:50.201 17370.988 - 17476.267: 95.3309% ( 25) 00:09:50.201 17476.267 - 17581.545: 95.5239% ( 21) 00:09:50.201 17581.545 - 17686.824: 95.7169% ( 21) 00:09:50.201 17686.824 - 17792.103: 95.8915% ( 19) 00:09:50.201 17792.103 - 17897.382: 96.0478% ( 17) 00:09:50.201 17897.382 - 18002.660: 96.1857% ( 15) 00:09:50.201 18002.660 - 18107.939: 96.3419% ( 17) 00:09:50.201 18107.939 - 18213.218: 96.4982% ( 17) 00:09:50.201 18213.218 - 18318.496: 96.7279% ( 25) 00:09:50.201 18318.496 - 18423.775: 96.9577% ( 25) 00:09:50.201 18423.775 - 18529.054: 97.1967% ( 26) 00:09:50.201 18529.054 - 18634.333: 97.4265% ( 25) 00:09:50.201 18634.333 - 18739.611: 97.6287% ( 22) 00:09:50.201 18739.611 - 18844.890: 97.8033% ( 19) 00:09:50.201 18844.890 - 18950.169: 98.0515% ( 27) 00:09:50.201 18950.169 - 19055.447: 98.2629% ( 23) 00:09:50.201 19055.447 - 19160.726: 98.3732% ( 12) 00:09:50.201 19160.726 - 19266.005: 98.4835% ( 12) 00:09:50.201 19266.005 - 19371.284: 98.5754% ( 10) 00:09:50.201 19371.284 - 19476.562: 98.6397% ( 7) 00:09:50.201 19476.562 - 19581.841: 98.6949% ( 6) 00:09:50.201 19581.841 - 19687.120: 98.7500% ( 6) 00:09:50.201 19687.120 - 19792.398: 98.8051% ( 6) 00:09:50.201 19792.398 - 19897.677: 98.8235% ( 2) 00:09:50.201 30109.712 - 30320.270: 98.8603% ( 4) 00:09:50.201 30320.270 - 30530.827: 98.9338% ( 8) 00:09:50.201 30530.827 - 30741.385: 98.9982% ( 7) 00:09:50.201 30741.385 - 30951.942: 99.0717% ( 8) 00:09:50.201 30951.942 - 31162.500: 99.1360% ( 7) 00:09:50.201 31162.500 - 31373.057: 99.2096% ( 8) 00:09:50.201 31373.057 - 31583.614: 99.2831% ( 8) 00:09:50.201 31583.614 - 31794.172: 99.3566% ( 8) 00:09:50.201 31794.172 - 32004.729: 99.4118% ( 6) 00:09:50.201 38110.895 - 38321.452: 99.4577% ( 5) 00:09:50.201 38321.452 - 38532.010: 99.5312% ( 8) 00:09:50.201 38532.010 - 38742.567: 99.5956% ( 7) 00:09:50.201 38742.567 - 38953.124: 99.6783% ( 9) 00:09:50.201 38953.124 - 39163.682: 99.7518% ( 8) 00:09:50.201 39163.682 - 39374.239: 99.8162% ( 7) 00:09:50.201 39374.239 - 39584.797: 99.8805% ( 7) 00:09:50.201 39584.797 - 39795.354: 99.9540% ( 8) 00:09:50.201 39795.354 - 40005.912: 100.0000% ( 5) 00:09:50.201 00:09:50.201 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:50.201 ============================================================================== 00:09:50.201 Range in us Cumulative IO count 00:09:50.201 8422.297 - 8474.937: 0.0092% ( 1) 00:09:50.201 8474.937 - 8527.576: 0.0184% ( 1) 00:09:50.201 8632.855 - 8685.494: 0.1103% ( 10) 00:09:50.201 8685.494 - 8738.133: 0.2665% ( 17) 00:09:50.201 8738.133 - 8790.773: 0.4412% ( 19) 00:09:50.201 8790.773 - 8843.412: 0.7721% ( 36) 00:09:50.201 8843.412 - 8896.051: 1.0846% ( 34) 00:09:50.201 8896.051 - 8948.691: 1.5993% ( 56) 00:09:50.201 8948.691 - 9001.330: 2.2702% ( 73) 00:09:50.201 9001.330 - 9053.969: 2.9596% ( 75) 00:09:50.201 9053.969 - 9106.609: 3.7592% ( 87) 00:09:50.201 9106.609 - 9159.248: 5.0368% ( 139) 00:09:50.201 9159.248 - 9211.888: 6.3603% ( 144) 00:09:50.201 9211.888 - 9264.527: 7.9136% ( 169) 00:09:50.201 9264.527 - 9317.166: 9.2463% ( 145) 00:09:50.201 9317.166 - 9369.806: 10.9835% ( 189) 00:09:50.201 9369.806 - 9422.445: 12.5460% ( 170) 00:09:50.201 9422.445 - 9475.084: 14.1820% ( 178) 00:09:50.201 9475.084 - 9527.724: 16.2592% ( 226) 00:09:50.201 9527.724 - 9580.363: 18.6673% ( 262) 00:09:50.201 9580.363 - 9633.002: 21.2868% ( 285) 00:09:50.201 9633.002 - 9685.642: 23.8235% ( 276) 00:09:50.201 9685.642 - 9738.281: 26.4522% ( 286) 00:09:50.201 9738.281 
- 9790.920: 29.4210% ( 323) 00:09:50.201 9790.920 - 9843.560: 31.8382% ( 263) 00:09:50.201 9843.560 - 9896.199: 34.3382% ( 272) 00:09:50.201 9896.199 - 9948.839: 36.0110% ( 182) 00:09:50.201 9948.839 - 10001.478: 37.4632% ( 158) 00:09:50.201 10001.478 - 10054.117: 39.0533% ( 173) 00:09:50.201 10054.117 - 10106.757: 40.4779% ( 155) 00:09:50.201 10106.757 - 10159.396: 41.6085% ( 123) 00:09:50.201 10159.396 - 10212.035: 42.7665% ( 126) 00:09:50.201 10212.035 - 10264.675: 43.6765% ( 99) 00:09:50.201 10264.675 - 10317.314: 44.9265% ( 136) 00:09:50.201 10317.314 - 10369.953: 46.3787% ( 158) 00:09:50.201 10369.953 - 10422.593: 47.4632% ( 118) 00:09:50.201 10422.593 - 10475.232: 49.0533% ( 173) 00:09:50.201 10475.232 - 10527.871: 50.2757% ( 133) 00:09:50.201 10527.871 - 10580.511: 51.5074% ( 134) 00:09:50.201 10580.511 - 10633.150: 53.4375% ( 210) 00:09:50.201 10633.150 - 10685.790: 54.5772% ( 124) 00:09:50.201 10685.790 - 10738.429: 55.5607% ( 107) 00:09:50.201 10738.429 - 10791.068: 56.8382% ( 139) 00:09:50.201 10791.068 - 10843.708: 57.6011% ( 83) 00:09:50.201 10843.708 - 10896.347: 58.2537% ( 71) 00:09:50.201 10896.347 - 10948.986: 58.9246% ( 73) 00:09:50.201 10948.986 - 11001.626: 59.4210% ( 54) 00:09:50.201 11001.626 - 11054.265: 60.0092% ( 64) 00:09:50.202 11054.265 - 11106.904: 60.7721% ( 83) 00:09:50.202 11106.904 - 11159.544: 61.2592% ( 53) 00:09:50.202 11159.544 - 11212.183: 61.6728% ( 45) 00:09:50.202 11212.183 - 11264.822: 61.9301% ( 28) 00:09:50.202 11264.822 - 11317.462: 62.2151% ( 31) 00:09:50.202 11317.462 - 11370.101: 62.5460% ( 36) 00:09:50.202 11370.101 - 11422.741: 62.9412% ( 43) 00:09:50.202 11422.741 - 11475.380: 63.3640% ( 46) 00:09:50.202 11475.380 - 11528.019: 63.8143% ( 49) 00:09:50.202 11528.019 - 11580.659: 64.2923% ( 52) 00:09:50.202 11580.659 - 11633.298: 64.7243% ( 47) 00:09:50.202 11633.298 - 11685.937: 65.2757% ( 60) 00:09:50.202 11685.937 - 11738.577: 65.8456% ( 62) 00:09:50.202 11738.577 - 11791.216: 66.6728% ( 90) 00:09:50.202 11791.216 - 11843.855: 67.7482% ( 117) 00:09:50.202 11843.855 - 11896.495: 68.5202% ( 84) 00:09:50.202 11896.495 - 11949.134: 69.1268% ( 66) 00:09:50.202 11949.134 - 12001.773: 69.5588% ( 47) 00:09:50.202 12001.773 - 12054.413: 69.9265% ( 40) 00:09:50.202 12054.413 - 12107.052: 70.4963% ( 62) 00:09:50.202 12107.052 - 12159.692: 70.9926% ( 54) 00:09:50.202 12159.692 - 12212.331: 71.3603% ( 40) 00:09:50.202 12212.331 - 12264.970: 71.7555% ( 43) 00:09:50.202 12264.970 - 12317.610: 72.2059% ( 49) 00:09:50.202 12317.610 - 12370.249: 72.5919% ( 42) 00:09:50.202 12370.249 - 12422.888: 73.0515% ( 50) 00:09:50.202 12422.888 - 12475.528: 73.3824% ( 36) 00:09:50.202 12475.528 - 12528.167: 73.5846% ( 22) 00:09:50.202 12528.167 - 12580.806: 73.8051% ( 24) 00:09:50.202 12580.806 - 12633.446: 73.9154% ( 12) 00:09:50.202 12633.446 - 12686.085: 74.0074% ( 10) 00:09:50.202 12686.085 - 12738.724: 74.0809% ( 8) 00:09:50.202 12738.724 - 12791.364: 74.1360% ( 6) 00:09:50.202 12791.364 - 12844.003: 74.2371% ( 11) 00:09:50.202 12844.003 - 12896.643: 74.3658% ( 14) 00:09:50.202 12896.643 - 12949.282: 74.6599% ( 32) 00:09:50.202 12949.282 - 13001.921: 74.8805% ( 24) 00:09:50.202 13001.921 - 13054.561: 75.1379% ( 28) 00:09:50.202 13054.561 - 13107.200: 75.6158% ( 52) 00:09:50.202 13107.200 - 13159.839: 76.0110% ( 43) 00:09:50.202 13159.839 - 13212.479: 76.4890% ( 52) 00:09:50.202 13212.479 - 13265.118: 77.0037% ( 56) 00:09:50.202 13265.118 - 13317.757: 77.2426% ( 26) 00:09:50.202 13317.757 - 13370.397: 77.5184% ( 30) 00:09:50.202 13370.397 - 13423.036: 77.8309% ( 
34) 00:09:50.202 13423.036 - 13475.676: 78.2077% ( 41) 00:09:50.202 13475.676 - 13580.954: 78.7960% ( 64) 00:09:50.202 13580.954 - 13686.233: 79.5404% ( 81) 00:09:50.202 13686.233 - 13791.512: 80.0000% ( 50) 00:09:50.202 13791.512 - 13896.790: 80.2757% ( 30) 00:09:50.202 13896.790 - 14002.069: 80.6434% ( 40) 00:09:50.202 14002.069 - 14107.348: 81.0662% ( 46) 00:09:50.202 14107.348 - 14212.627: 81.5441% ( 52) 00:09:50.202 14212.627 - 14317.905: 82.0037% ( 50) 00:09:50.202 14317.905 - 14423.184: 82.3805% ( 41) 00:09:50.202 14423.184 - 14528.463: 82.6562% ( 30) 00:09:50.202 14528.463 - 14633.741: 82.9136% ( 28) 00:09:50.202 14633.741 - 14739.020: 83.4191% ( 55) 00:09:50.202 14739.020 - 14844.299: 83.9338% ( 56) 00:09:50.202 14844.299 - 14949.578: 84.6140% ( 74) 00:09:50.202 14949.578 - 15054.856: 85.3309% ( 78) 00:09:50.202 15054.856 - 15160.135: 86.1857% ( 93) 00:09:50.202 15160.135 - 15265.414: 86.9761% ( 86) 00:09:50.202 15265.414 - 15370.692: 87.4449% ( 51) 00:09:50.202 15370.692 - 15475.971: 87.9871% ( 59) 00:09:50.202 15475.971 - 15581.250: 88.5018% ( 56) 00:09:50.202 15581.250 - 15686.529: 89.0717% ( 62) 00:09:50.202 15686.529 - 15791.807: 89.8529% ( 85) 00:09:50.202 15791.807 - 15897.086: 90.6710% ( 89) 00:09:50.202 15897.086 - 16002.365: 91.2684% ( 65) 00:09:50.202 16002.365 - 16107.643: 91.8934% ( 68) 00:09:50.202 16107.643 - 16212.922: 92.1967% ( 33) 00:09:50.202 16212.922 - 16318.201: 92.4357% ( 26) 00:09:50.202 16318.201 - 16423.480: 92.7757% ( 37) 00:09:50.202 16423.480 - 16528.758: 93.1434% ( 40) 00:09:50.202 16528.758 - 16634.037: 93.6121% ( 51) 00:09:50.202 16634.037 - 16739.316: 94.0074% ( 43) 00:09:50.202 16739.316 - 16844.594: 94.2555% ( 27) 00:09:50.202 16844.594 - 16949.873: 94.4761% ( 24) 00:09:50.202 16949.873 - 17055.152: 94.6048% ( 14) 00:09:50.202 17055.152 - 17160.431: 94.6967% ( 10) 00:09:50.202 17160.431 - 17265.709: 94.8070% ( 12) 00:09:50.202 17265.709 - 17370.988: 94.9357% ( 14) 00:09:50.202 17370.988 - 17476.267: 95.0460% ( 12) 00:09:50.202 17476.267 - 17581.545: 95.1562% ( 12) 00:09:50.202 17581.545 - 17686.824: 95.2757% ( 13) 00:09:50.202 17686.824 - 17792.103: 95.4228% ( 16) 00:09:50.202 17792.103 - 17897.382: 95.6893% ( 29) 00:09:50.202 17897.382 - 18002.660: 95.9559% ( 29) 00:09:50.202 18002.660 - 18107.939: 96.3787% ( 46) 00:09:50.202 18107.939 - 18213.218: 96.7004% ( 35) 00:09:50.202 18213.218 - 18318.496: 96.9761% ( 30) 00:09:50.202 18318.496 - 18423.775: 97.1875% ( 23) 00:09:50.202 18423.775 - 18529.054: 97.4265% ( 26) 00:09:50.202 18529.054 - 18634.333: 97.6379% ( 23) 00:09:50.202 18634.333 - 18739.611: 97.8768% ( 26) 00:09:50.202 18739.611 - 18844.890: 98.1250% ( 27) 00:09:50.202 18844.890 - 18950.169: 98.3272% ( 22) 00:09:50.202 18950.169 - 19055.447: 98.5110% ( 20) 00:09:50.202 19055.447 - 19160.726: 98.6397% ( 14) 00:09:50.202 19160.726 - 19266.005: 98.7224% ( 9) 00:09:50.202 19266.005 - 19371.284: 98.7776% ( 6) 00:09:50.202 19371.284 - 19476.562: 98.8235% ( 5) 00:09:50.202 29478.040 - 29688.598: 98.8603% ( 4) 00:09:50.202 29688.598 - 29899.155: 98.9338% ( 8) 00:09:50.202 29899.155 - 30109.712: 99.0074% ( 8) 00:09:50.202 30109.712 - 30320.270: 99.0717% ( 7) 00:09:50.202 30320.270 - 30530.827: 99.1544% ( 9) 00:09:50.202 30530.827 - 30741.385: 99.2279% ( 8) 00:09:50.202 30741.385 - 30951.942: 99.2923% ( 7) 00:09:50.202 30951.942 - 31162.500: 99.3658% ( 8) 00:09:50.202 31162.500 - 31373.057: 99.4118% ( 5) 00:09:50.202 37268.665 - 37479.222: 99.4945% ( 9) 00:09:50.202 37479.222 - 37689.780: 99.5588% ( 7) 00:09:50.202 37689.780 - 37900.337: 99.6232% 
( 7) 00:09:50.202 37900.337 - 38110.895: 99.7059% ( 9) 00:09:50.202 38110.895 - 38321.452: 99.7702% ( 7) 00:09:50.202 38321.452 - 38532.010: 99.8438% ( 8) 00:09:50.202 38532.010 - 38742.567: 99.9173% ( 8) 00:09:50.202 38742.567 - 38953.124: 99.9908% ( 8) 00:09:50.202 38953.124 - 39163.682: 100.0000% ( 1) 00:09:50.202 00:09:50.202 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:50.202 ============================================================================== 00:09:50.202 Range in us Cumulative IO count 00:09:50.202 8422.297 - 8474.937: 0.0368% ( 4) 00:09:50.202 8474.937 - 8527.576: 0.0827% ( 5) 00:09:50.202 8527.576 - 8580.215: 0.1654% ( 9) 00:09:50.202 8580.215 - 8632.855: 0.3401% ( 19) 00:09:50.202 8632.855 - 8685.494: 0.6066% ( 29) 00:09:50.202 8685.494 - 8738.133: 0.7721% ( 18) 00:09:50.202 8738.133 - 8790.773: 0.9835% ( 23) 00:09:50.202 8790.773 - 8843.412: 1.2040% ( 24) 00:09:50.202 8843.412 - 8896.051: 1.4338% ( 25) 00:09:50.202 8896.051 - 8948.691: 1.8199% ( 42) 00:09:50.202 8948.691 - 9001.330: 2.1599% ( 37) 00:09:50.202 9001.330 - 9053.969: 2.5184% ( 39) 00:09:50.202 9053.969 - 9106.609: 3.2721% ( 82) 00:09:50.202 9106.609 - 9159.248: 4.2096% ( 102) 00:09:50.202 9159.248 - 9211.888: 5.7169% ( 164) 00:09:50.202 9211.888 - 9264.527: 7.4173% ( 185) 00:09:50.202 9264.527 - 9317.166: 9.1728% ( 191) 00:09:50.202 9317.166 - 9369.806: 11.1029% ( 210) 00:09:50.202 9369.806 - 9422.445: 13.0607% ( 213) 00:09:50.202 9422.445 - 9475.084: 15.1838% ( 231) 00:09:50.202 9475.084 - 9527.724: 17.2886% ( 229) 00:09:50.202 9527.724 - 9580.363: 19.8070% ( 274) 00:09:50.202 9580.363 - 9633.002: 22.1691% ( 257) 00:09:50.202 9633.002 - 9685.642: 24.8438% ( 291) 00:09:50.202 9685.642 - 9738.281: 27.3989% ( 278) 00:09:50.202 9738.281 - 9790.920: 29.6783% ( 248) 00:09:50.202 9790.920 - 9843.560: 31.8934% ( 241) 00:09:50.202 9843.560 - 9896.199: 33.8143% ( 209) 00:09:50.202 9896.199 - 9948.839: 35.5974% ( 194) 00:09:50.202 9948.839 - 10001.478: 37.1507% ( 169) 00:09:50.202 10001.478 - 10054.117: 38.7500% ( 174) 00:09:50.202 10054.117 - 10106.757: 40.2482% ( 163) 00:09:50.202 10106.757 - 10159.396: 41.3603% ( 121) 00:09:50.202 10159.396 - 10212.035: 42.3713% ( 110) 00:09:50.202 10212.035 - 10264.675: 43.2996% ( 101) 00:09:50.202 10264.675 - 10317.314: 44.3382% ( 113) 00:09:50.202 10317.314 - 10369.953: 45.4871% ( 125) 00:09:50.202 10369.953 - 10422.593: 46.8842% ( 152) 00:09:50.202 10422.593 - 10475.232: 48.4651% ( 172) 00:09:50.202 10475.232 - 10527.871: 50.3493% ( 205) 00:09:50.202 10527.871 - 10580.511: 51.9485% ( 174) 00:09:50.202 10580.511 - 10633.150: 53.3824% ( 156) 00:09:50.202 10633.150 - 10685.790: 54.4393% ( 115) 00:09:50.202 10685.790 - 10738.429: 55.5055% ( 116) 00:09:50.202 10738.429 - 10791.068: 56.5533% ( 114) 00:09:50.202 10791.068 - 10843.708: 57.2610% ( 77) 00:09:50.202 10843.708 - 10896.347: 57.9779% ( 78) 00:09:50.202 10896.347 - 10948.986: 58.8051% ( 90) 00:09:50.202 10948.986 - 11001.626: 59.5312% ( 79) 00:09:50.202 11001.626 - 11054.265: 60.1746% ( 70) 00:09:50.202 11054.265 - 11106.904: 60.7077% ( 58) 00:09:50.202 11106.904 - 11159.544: 61.0570% ( 38) 00:09:50.202 11159.544 - 11212.183: 61.3695% ( 34) 00:09:50.202 11212.183 - 11264.822: 61.7004% ( 36) 00:09:50.202 11264.822 - 11317.462: 61.9761% ( 30) 00:09:50.202 11317.462 - 11370.101: 62.1599% ( 20) 00:09:50.202 11370.101 - 11422.741: 62.6195% ( 50) 00:09:50.202 11422.741 - 11475.380: 63.2169% ( 65) 00:09:50.202 11475.380 - 11528.019: 63.8695% ( 71) 00:09:50.202 11528.019 - 11580.659: 64.4210% ( 60) 
00:09:50.202 11580.659 - 11633.298: 65.0000% ( 63) 00:09:50.202 11633.298 - 11685.937: 65.7169% ( 78) 00:09:50.202 11685.937 - 11738.577: 66.3143% ( 65) 00:09:50.202 11738.577 - 11791.216: 66.6636% ( 38) 00:09:50.202 11791.216 - 11843.855: 67.0129% ( 38) 00:09:50.203 11843.855 - 11896.495: 67.4632% ( 49) 00:09:50.203 11896.495 - 11949.134: 67.8768% ( 45) 00:09:50.203 11949.134 - 12001.773: 68.2904% ( 45) 00:09:50.203 12001.773 - 12054.413: 68.6489% ( 39) 00:09:50.203 12054.413 - 12107.052: 69.0717% ( 46) 00:09:50.203 12107.052 - 12159.692: 69.4301% ( 39) 00:09:50.203 12159.692 - 12212.331: 69.7426% ( 34) 00:09:50.203 12212.331 - 12264.970: 70.0643% ( 35) 00:09:50.203 12264.970 - 12317.610: 70.4779% ( 45) 00:09:50.203 12317.610 - 12370.249: 70.8456% ( 40) 00:09:50.203 12370.249 - 12422.888: 71.1121% ( 29) 00:09:50.203 12422.888 - 12475.528: 71.4338% ( 35) 00:09:50.203 12475.528 - 12528.167: 71.7279% ( 32) 00:09:50.203 12528.167 - 12580.806: 72.1599% ( 47) 00:09:50.203 12580.806 - 12633.446: 72.6930% ( 58) 00:09:50.203 12633.446 - 12686.085: 73.3272% ( 69) 00:09:50.203 12686.085 - 12738.724: 73.8419% ( 56) 00:09:50.203 12738.724 - 12791.364: 74.3382% ( 54) 00:09:50.203 12791.364 - 12844.003: 74.8897% ( 60) 00:09:50.203 12844.003 - 12896.643: 75.3768% ( 53) 00:09:50.203 12896.643 - 12949.282: 75.8272% ( 49) 00:09:50.203 12949.282 - 13001.921: 76.2776% ( 49) 00:09:50.203 13001.921 - 13054.561: 76.7371% ( 50) 00:09:50.203 13054.561 - 13107.200: 77.1507% ( 45) 00:09:50.203 13107.200 - 13159.839: 77.5643% ( 45) 00:09:50.203 13159.839 - 13212.479: 77.8125% ( 27) 00:09:50.203 13212.479 - 13265.118: 78.0699% ( 28) 00:09:50.203 13265.118 - 13317.757: 78.3548% ( 31) 00:09:50.203 13317.757 - 13370.397: 78.5938% ( 26) 00:09:50.203 13370.397 - 13423.036: 78.8235% ( 25) 00:09:50.203 13423.036 - 13475.676: 79.0993% ( 30) 00:09:50.203 13475.676 - 13580.954: 79.3934% ( 32) 00:09:50.203 13580.954 - 13686.233: 79.6324% ( 26) 00:09:50.203 13686.233 - 13791.512: 80.0368% ( 44) 00:09:50.203 13791.512 - 13896.790: 80.4779% ( 48) 00:09:50.203 13896.790 - 14002.069: 80.7904% ( 34) 00:09:50.203 14002.069 - 14107.348: 81.3419% ( 60) 00:09:50.203 14107.348 - 14212.627: 81.6176% ( 30) 00:09:50.203 14212.627 - 14317.905: 81.8934% ( 30) 00:09:50.203 14317.905 - 14423.184: 82.2243% ( 36) 00:09:50.203 14423.184 - 14528.463: 82.6746% ( 49) 00:09:50.203 14528.463 - 14633.741: 83.0790% ( 44) 00:09:50.203 14633.741 - 14739.020: 83.5294% ( 49) 00:09:50.203 14739.020 - 14844.299: 83.8419% ( 34) 00:09:50.203 14844.299 - 14949.578: 84.2555% ( 45) 00:09:50.203 14949.578 - 15054.856: 84.8438% ( 64) 00:09:50.203 15054.856 - 15160.135: 85.7261% ( 96) 00:09:50.203 15160.135 - 15265.414: 86.4982% ( 84) 00:09:50.203 15265.414 - 15370.692: 87.2794% ( 85) 00:09:50.203 15370.692 - 15475.971: 87.9596% ( 74) 00:09:50.203 15475.971 - 15581.250: 88.6305% ( 73) 00:09:50.203 15581.250 - 15686.529: 89.0993% ( 51) 00:09:50.203 15686.529 - 15791.807: 89.6140% ( 56) 00:09:50.203 15791.807 - 15897.086: 90.1930% ( 63) 00:09:50.203 15897.086 - 16002.365: 90.9007% ( 77) 00:09:50.203 16002.365 - 16107.643: 91.3511% ( 49) 00:09:50.203 16107.643 - 16212.922: 91.8842% ( 58) 00:09:50.203 16212.922 - 16318.201: 92.3438% ( 50) 00:09:50.203 16318.201 - 16423.480: 92.8493% ( 55) 00:09:50.203 16423.480 - 16528.758: 93.0974% ( 27) 00:09:50.203 16528.758 - 16634.037: 93.3824% ( 31) 00:09:50.203 16634.037 - 16739.316: 93.6949% ( 34) 00:09:50.203 16739.316 - 16844.594: 93.8971% ( 22) 00:09:50.203 16844.594 - 16949.873: 94.0809% ( 20) 00:09:50.203 16949.873 - 17055.152: 
94.3566% ( 30) 00:09:50.203 17055.152 - 17160.431: 94.5956% ( 26) 00:09:50.203 17160.431 - 17265.709: 94.9173% ( 35) 00:09:50.203 17265.709 - 17370.988: 95.2574% ( 37) 00:09:50.203 17370.988 - 17476.267: 95.5699% ( 34) 00:09:50.203 17476.267 - 17581.545: 95.9375% ( 40) 00:09:50.203 17581.545 - 17686.824: 96.1581% ( 24) 00:09:50.203 17686.824 - 17792.103: 96.3419% ( 20) 00:09:50.203 17792.103 - 17897.382: 96.5165% ( 19) 00:09:50.203 17897.382 - 18002.660: 96.7371% ( 24) 00:09:50.203 18002.660 - 18107.939: 96.9853% ( 27) 00:09:50.203 18107.939 - 18213.218: 97.1783% ( 21) 00:09:50.203 18213.218 - 18318.496: 97.3162% ( 15) 00:09:50.203 18318.496 - 18423.775: 97.4908% ( 19) 00:09:50.203 18423.775 - 18529.054: 97.6838% ( 21) 00:09:50.203 18529.054 - 18634.333: 97.8493% ( 18) 00:09:50.203 18634.333 - 18739.611: 98.0331% ( 20) 00:09:50.203 18739.611 - 18844.890: 98.0974% ( 7) 00:09:50.203 18844.890 - 18950.169: 98.1985% ( 11) 00:09:50.203 18950.169 - 19055.447: 98.2996% ( 11) 00:09:50.203 19055.447 - 19160.726: 98.3915% ( 10) 00:09:50.203 19160.726 - 19266.005: 98.4559% ( 7) 00:09:50.203 19266.005 - 19371.284: 98.5110% ( 6) 00:09:50.203 19371.284 - 19476.562: 98.5570% ( 5) 00:09:50.203 19476.562 - 19581.841: 98.6121% ( 6) 00:09:50.203 19581.841 - 19687.120: 98.6673% ( 6) 00:09:50.203 19687.120 - 19792.398: 98.7224% ( 6) 00:09:50.203 19792.398 - 19897.677: 98.7776% ( 6) 00:09:50.203 19897.677 - 20002.956: 98.8235% ( 5) 00:09:50.203 27793.581 - 28004.138: 98.8879% ( 7) 00:09:50.203 28004.138 - 28214.696: 98.9614% ( 8) 00:09:50.203 28214.696 - 28425.253: 99.0257% ( 7) 00:09:50.203 28425.253 - 28635.810: 99.1085% ( 9) 00:09:50.203 28635.810 - 28846.368: 99.1820% ( 8) 00:09:50.203 28846.368 - 29056.925: 99.2647% ( 9) 00:09:50.203 29056.925 - 29267.483: 99.3382% ( 8) 00:09:50.203 29267.483 - 29478.040: 99.4118% ( 8) 00:09:50.203 35373.648 - 35584.206: 99.4393% ( 3) 00:09:50.203 35584.206 - 35794.763: 99.5037% ( 7) 00:09:50.203 35794.763 - 36005.320: 99.5772% ( 8) 00:09:50.203 36005.320 - 36215.878: 99.6507% ( 8) 00:09:50.203 36215.878 - 36426.435: 99.7335% ( 9) 00:09:50.203 36426.435 - 36636.993: 99.7978% ( 7) 00:09:50.203 36636.993 - 36847.550: 99.8621% ( 7) 00:09:50.203 36847.550 - 37058.108: 99.9357% ( 8) 00:09:50.203 37058.108 - 37268.665: 100.0000% ( 7) 00:09:50.203 00:09:50.203 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:50.203 ============================================================================== 00:09:50.203 Range in us Cumulative IO count 00:09:50.203 8527.576 - 8580.215: 0.0092% ( 1) 00:09:50.203 8632.855 - 8685.494: 0.0368% ( 3) 00:09:50.203 8685.494 - 8738.133: 0.0735% ( 4) 00:09:50.203 8738.133 - 8790.773: 0.1838% ( 12) 00:09:50.203 8790.773 - 8843.412: 0.3768% ( 21) 00:09:50.203 8843.412 - 8896.051: 0.6342% ( 28) 00:09:50.203 8896.051 - 8948.691: 1.1213% ( 53) 00:09:50.203 8948.691 - 9001.330: 1.8842% ( 83) 00:09:50.203 9001.330 - 9053.969: 2.7022% ( 89) 00:09:50.203 9053.969 - 9106.609: 3.7684% ( 116) 00:09:50.203 9106.609 - 9159.248: 5.2757% ( 164) 00:09:50.203 9159.248 - 9211.888: 7.1140% ( 200) 00:09:50.203 9211.888 - 9264.527: 8.5386% ( 155) 00:09:50.203 9264.527 - 9317.166: 10.0276% ( 162) 00:09:50.203 9317.166 - 9369.806: 11.5809% ( 169) 00:09:50.203 9369.806 - 9422.445: 13.0882% ( 164) 00:09:50.203 9422.445 - 9475.084: 15.0000% ( 208) 00:09:50.203 9475.084 - 9527.724: 17.3989% ( 261) 00:09:50.203 9527.724 - 9580.363: 19.5037% ( 229) 00:09:50.203 9580.363 - 9633.002: 22.0221% ( 274) 00:09:50.203 9633.002 - 9685.642: 24.1360% ( 230) 00:09:50.203 
[latency histogram buckets, 9685.642 us through 35584.206 us: cumulative IO count rising from 26.2500% ( 230) to 100.0000% ( 8)]
00:09:50.204 
00:09:50.204 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:50.204 ==============================================================================
00:09:50.204 Range in us Cumulative IO count
00:09:50.204 [latency histogram buckets, 8369.658 us through 27793.581 us: cumulative IO count rising from 0.0274% ( 3) to 100.0000% ( 2)]
00:09:50.205 
00:09:50.205 11:53:40 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:09:50.205 
00:09:50.205 real 0m2.708s
00:09:50.205 user 0m2.285s
00:09:50.205 sys 0m0.304s
00:09:50.205 11:53:40 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:50.205 ************************************
00:09:50.205 END TEST nvme_perf
00:09:50.205 ************************************
00:09:50.205 11:53:40 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:09:50.205 11:53:40 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:50.205 11:53:40 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:50.205 11:53:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:50.205 11:53:40 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:50.205 ************************************
00:09:50.205 START TEST nvme_hello_world
00:09:50.205 ************************************
00:09:50.205 11:53:40 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:50.464 Initializing NVMe Controllers
00:09:50.464 Attached to 0000:00:10.0
00:09:50.464 Namespace ID: 1
size: 6GB 00:09:50.464 Attached to 0000:00:11.0 00:09:50.464 Namespace ID: 1 size: 5GB 00:09:50.464 Attached to 0000:00:13.0 00:09:50.464 Namespace ID: 1 size: 1GB 00:09:50.464 Attached to 0000:00:12.0 00:09:50.464 Namespace ID: 1 size: 4GB 00:09:50.464 Namespace ID: 2 size: 4GB 00:09:50.464 Namespace ID: 3 size: 4GB 00:09:50.464 Initialization complete. 00:09:50.464 INFO: using host memory buffer for IO 00:09:50.464 Hello world! 00:09:50.464 INFO: using host memory buffer for IO 00:09:50.464 Hello world! 00:09:50.464 INFO: using host memory buffer for IO 00:09:50.464 Hello world! 00:09:50.464 INFO: using host memory buffer for IO 00:09:50.464 Hello world! 00:09:50.464 INFO: using host memory buffer for IO 00:09:50.464 Hello world! 00:09:50.464 INFO: using host memory buffer for IO 00:09:50.464 Hello world! 00:09:50.464 00:09:50.464 real 0m0.302s 00:09:50.464 user 0m0.125s 00:09:50.464 sys 0m0.134s 00:09:50.464 11:53:40 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.464 11:53:40 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:50.464 ************************************ 00:09:50.464 END TEST nvme_hello_world 00:09:50.464 ************************************ 00:09:50.464 11:53:40 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:50.464 11:53:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:50.464 11:53:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.464 11:53:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:50.464 ************************************ 00:09:50.464 START TEST nvme_sgl 00:09:50.464 ************************************ 00:09:50.464 11:53:40 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:50.724 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:09:50.724 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:09:50.724 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:09:50.724 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:09:50.724 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:09:50.724 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:09:50.724 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:09:50.724 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:09:50.724 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:09:50.984 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:09:50.984 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:09:50.984 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:09:50.984 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:09:50.984 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:09:50.984 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:09:50.984 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:09:50.984 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:09:50.984 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:09:50.984 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:09:50.984 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:09:50.984 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:09:50.984 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:09:50.984 0000:00:13.0: build_io_request_10 Invalid IO length parameter 
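Before the nvme_sgl messages continue below, a word on the nvme_hello_world run above. It is built from SPDK's public hello_world example: probe every local PCIe controller, attach, and for each active namespace write a greeting to LBA 0 from a DMA-safe host buffer, read it back, and print it, which is where the "using host memory buffer for IO" / "Hello world!" pairs come from. The following is a minimal sketch against the public SPDK NVMe API, not the example's actual source; the callback and variable names are invented, and error handling plus the read-back step are elided.

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    return true; /* attach to every controller found by the probe */
}

static void
io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
    *(bool *)arg = true; /* completion flag polled below */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
    struct spdk_nvme_qpair *qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
    /* DMA-able buffer from the env layer: the "host memory buffer" of the log */
    char *buf = spdk_zmalloc(0x1000, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY,
                             SPDK_MALLOC_DMA);
    uint32_t nsid;

    for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
        bool done = false;

        snprintf(buf, 0x1000, "Hello world!");
        if (spdk_nvme_ns_cmd_write(ns, qpair, buf, 0 /* LBA */,
                                   1 /* block count */, io_complete,
                                   &done, 0) == 0) {
            while (!done) {
                spdk_nvme_qpair_process_completions(qpair, 0);
            }
        }
        /* the real example then reads LBA 0 back and prints the payload */
    }
    spdk_free(buf);
    spdk_nvme_ctrlr_free_io_qpair(qpair);
}

int
main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }
    return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}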
00:09:50.984 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:09:50.984 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:09:50.984 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:09:50.984 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:09:50.984 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:09:50.984 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:09:50.984 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:09:50.984 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:09:50.984 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:09:50.984 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:09:50.984 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:09:50.984 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:09:50.984 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:09:50.984 NVMe Readv/Writev Request test 00:09:50.984 Attached to 0000:00:10.0 00:09:50.984 Attached to 0000:00:11.0 00:09:50.984 Attached to 0000:00:13.0 00:09:50.984 Attached to 0000:00:12.0 00:09:50.984 0000:00:10.0: build_io_request_2 test passed 00:09:50.984 0000:00:10.0: build_io_request_4 test passed 00:09:50.984 0000:00:10.0: build_io_request_5 test passed 00:09:50.984 0000:00:10.0: build_io_request_6 test passed 00:09:50.984 0000:00:10.0: build_io_request_7 test passed 00:09:50.984 0000:00:10.0: build_io_request_10 test passed 00:09:50.984 0000:00:11.0: build_io_request_2 test passed 00:09:50.984 0000:00:11.0: build_io_request_4 test passed 00:09:50.984 0000:00:11.0: build_io_request_5 test passed 00:09:50.984 0000:00:11.0: build_io_request_6 test passed 00:09:50.984 0000:00:11.0: build_io_request_7 test passed 00:09:50.984 0000:00:11.0: build_io_request_10 test passed 00:09:50.984 Cleaning up... 00:09:50.984 00:09:50.984 real 0m0.367s 00:09:50.984 user 0m0.174s 00:09:50.984 sys 0m0.141s 00:09:50.984 11:53:40 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.984 ************************************ 00:09:50.984 END TEST nvme_sgl 00:09:50.984 ************************************ 00:09:50.984 11:53:40 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:09:50.984 11:53:40 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:50.984 11:53:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:50.984 11:53:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.984 11:53:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:50.984 ************************************ 00:09:50.984 START TEST nvme_e2edp 00:09:50.984 ************************************ 00:09:50.984 11:53:40 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:51.249 NVMe Write/Read with End-to-End data protection test 00:09:51.249 Attached to 0000:00:10.0 00:09:51.249 Attached to 0000:00:11.0 00:09:51.249 Attached to 0000:00:13.0 00:09:51.249 Attached to 0000:00:12.0 00:09:51.249 Cleaning up... 
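The nvme_sgl pass/fail pattern above comes from building I/O requests out of scattered buffer segments: the driver walks a caller-supplied scatter-gather list through two callbacks, and a request whose segment lengths do not sum to lba_count times the block size is refused before submission, which the test reports as the "Invalid IO length parameter" / "test passed" pairs. Below is a minimal sketch of that callback interface; the sgl_ctx holder and function names are invented for illustration, while spdk_nvme_ns_cmd_writev() and the callback signatures are the public API.

#include "spdk/nvme.h"

struct sgl_ctx {
    struct { void *base; uint32_t len; } sge[4]; /* scattered segments */
    int cur;
};

/* Called by the driver before it begins (or restarts) walking the SGL. */
static void
reset_sgl(void *cb_arg, uint32_t sgl_offset)
{
    struct sgl_ctx *ctx = cb_arg;

    ctx->cur = 0; /* a robust version would seek to sgl_offset */
}

/* Called repeatedly; each call hands the driver one more segment. */
static int
next_sge(void *cb_arg, void **address, uint32_t *length)
{
    struct sgl_ctx *ctx = cb_arg;

    *address = ctx->sge[ctx->cur].base;
    *length = ctx->sge[ctx->cur].len;
    ctx->cur++;
    return 0;
}

/*
 * If the segment lengths sum to lba_count * block size, the request is
 * built and submitted; otherwise submission fails up front, which is what
 * the alternating failure/pass lines above are asserting.
 */
static int
submit_scattered_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                       struct sgl_ctx *ctx, uint64_t lba, uint32_t lba_count,
                       spdk_nvme_cmd_cb cb_fn, void *cb_arg)
{
    return spdk_nvme_ns_cmd_writev(ns, qpair, lba, lba_count, cb_fn, cb_arg,
                                   0 /* io_flags */, reset_sgl, next_sge);
}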
00:09:51.249 00:09:51.249 real 0m0.304s 00:09:51.249 user 0m0.108s 00:09:51.249 sys 0m0.148s 00:09:51.249 11:53:41 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.249 11:53:41 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:09:51.249 ************************************ 00:09:51.249 END TEST nvme_e2edp 00:09:51.249 ************************************ 00:09:51.249 11:53:41 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:51.249 11:53:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:51.249 11:53:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.249 11:53:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:51.249 ************************************ 00:09:51.249 START TEST nvme_reserve 00:09:51.249 ************************************ 00:09:51.249 11:53:41 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:51.549 ===================================================== 00:09:51.549 NVMe Controller at PCI bus 0, device 16, function 0 00:09:51.549 ===================================================== 00:09:51.549 Reservations: Not Supported 00:09:51.549 ===================================================== 00:09:51.549 NVMe Controller at PCI bus 0, device 17, function 0 00:09:51.549 ===================================================== 00:09:51.549 Reservations: Not Supported 00:09:51.549 ===================================================== 00:09:51.549 NVMe Controller at PCI bus 0, device 19, function 0 00:09:51.549 ===================================================== 00:09:51.549 Reservations: Not Supported 00:09:51.549 ===================================================== 00:09:51.549 NVMe Controller at PCI bus 0, device 18, function 0 00:09:51.549 ===================================================== 00:09:51.549 Reservations: Not Supported 00:09:51.549 Reservation test passed 00:09:51.549 00:09:51.549 real 0m0.281s 00:09:51.549 user 0m0.103s 00:09:51.549 sys 0m0.132s 00:09:51.549 11:53:41 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.549 11:53:41 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:09:51.549 ************************************ 00:09:51.549 END TEST nvme_reserve 00:09:51.549 ************************************ 00:09:51.806 11:53:41 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:51.806 11:53:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:51.806 11:53:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.806 11:53:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:51.806 ************************************ 00:09:51.806 START TEST nvme_err_injection 00:09:51.806 ************************************ 00:09:51.806 11:53:41 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:52.065 NVMe Error Injection test 00:09:52.065 Attached to 0000:00:10.0 00:09:52.065 Attached to 0000:00:11.0 00:09:52.065 Attached to 0000:00:13.0 00:09:52.065 Attached to 0000:00:12.0 00:09:52.065 0000:00:10.0: get features failed as expected 00:09:52.065 0000:00:11.0: get features failed as expected 00:09:52.065 0000:00:13.0: get features failed as expected 00:09:52.065 0000:00:12.0: get features failed as expected 00:09:52.065 
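The nvme_err_injection lines continue below; first, a note on the nvme_reserve result above. "Reservations: Not Supported" is the expected answer on these QEMU-emulated controllers: a reservation test has to gate on the controller advertising reservation support before attempting the register/acquire/release sequence. A hedged sketch of such a gate using public SPDK structures; the helper name and the key value are illustrative, and this is not claimed to be the test's exact logic.

#include <stdio.h>
#include "spdk/nvme.h"

/*
 * Issue a reservation register only when the controller advertises
 * reservation support in the Identify Controller ONCS field.
 */
static bool
check_reservations(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns,
                   struct spdk_nvme_qpair *qpair,
                   spdk_nvme_cmd_cb cb_fn, void *cb_arg)
{
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

    if (!cdata->oncs.reservations) {
        printf("Reservations: Not Supported\n");
        return false;
    }

    struct spdk_nvme_reservation_register_data rr_data = {
        .crkey = 0,
        .nrkey = 0xa11 /* illustrative new reservation key */
    };

    return spdk_nvme_ns_cmd_reservation_register(ns, qpair, &rr_data,
            true /* ignore existing key */,
            SPDK_NVME_RESERVE_REGISTER_KEY,
            SPDK_NVME_RESERVE_PTPL_NO_CHANGES,
            cb_fn, cb_arg) == 0;
}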
0000:00:11.0: get features successfully as expected 00:09:52.065 0000:00:13.0: get features successfully as expected 00:09:52.065 0000:00:12.0: get features successfully as expected 00:09:52.065 0000:00:10.0: get features successfully as expected 00:09:52.065 0000:00:11.0: read failed as expected 00:09:52.065 0000:00:13.0: read failed as expected 00:09:52.065 0000:00:12.0: read failed as expected 00:09:52.065 0000:00:10.0: read failed as expected 00:09:52.065 0000:00:10.0: read successfully as expected 00:09:52.065 0000:00:11.0: read successfully as expected 00:09:52.065 0000:00:13.0: read successfully as expected 00:09:52.065 0000:00:12.0: read successfully as expected 00:09:52.065 Cleaning up... 00:09:52.065 00:09:52.065 real 0m0.323s 00:09:52.065 user 0m0.133s 00:09:52.065 sys 0m0.139s 00:09:52.065 11:53:41 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.065 ************************************ 00:09:52.065 END TEST nvme_err_injection 00:09:52.065 11:53:41 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:09:52.065 ************************************ 00:09:52.065 11:53:42 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:52.065 11:53:42 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:09:52.065 11:53:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.065 11:53:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:52.065 ************************************ 00:09:52.065 START TEST nvme_overhead 00:09:52.065 ************************************ 00:09:52.065 11:53:42 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:53.441 Initializing NVMe Controllers 00:09:53.441 Attached to 0000:00:10.0 00:09:53.441 Attached to 0000:00:11.0 00:09:53.442 Attached to 0000:00:13.0 00:09:53.442 Attached to 0000:00:12.0 00:09:53.442 Initialization complete. Launching workers. 
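With the nvme_err_injection run above complete (the nvme_overhead results follow below), the "failed as expected" / "successfully as expected" pairs deserve a note: SPDK ships a command error injection hook that arms a synthetic failure for a chosen opcode, lets the test observe the failure, and is then removed so the same command succeeds. A sketch under the assumption that the test targets Get Features on the admin queue, as the log output suggests; the status code choice here is illustrative.

#include "spdk/nvme.h"

/*
 * Arm one synthetic failure for Get Features on the admin queue, so the
 * next such command completes with a generic internal device error.
 */
static int
arm_get_features_error(struct spdk_nvme_ctrlr *ctrlr)
{
    return spdk_nvme_qpair_add_cmd_error_injection(ctrlr,
            NULL /* NULL selects the admin qpair */,
            SPDK_NVME_OPC_GET_FEATURES,
            false /* still submit the command */,
            0 /* no timeout */,
            1 /* inject exactly one error */,
            SPDK_NVME_SCT_GENERIC,
            SPDK_NVME_SC_INTERNAL_DEVICE_ERROR);
}

/* Disarm it again so the retried Get Features succeeds. */
static void
disarm_get_features_error(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
            SPDK_NVME_OPC_GET_FEATURES);
}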
00:09:53.442 submit (in ns) avg, min, max = 13885.5, 11356.6, 106209.6
00:09:53.442 complete (in ns) avg, min, max = 8137.1, 7755.0, 100986.3
00:09:53.442 
00:09:53.442 Submit histogram
00:09:53.442 ================
00:09:53.442 Range in us Cumulative Count
00:09:53.442 [submit histogram buckets, 11.309 us through 106.924 us: cumulative count rising from 0.0159% ( 1) to 100.0000% ( 1)]
00:09:53.442 
00:09:53.442 Complete histogram
00:09:53.442 ==================
00:09:53.442 Range in us Cumulative Count
00:09:53.443 [complete histogram buckets, 7.711 us through 101.166 us: cumulative count rising from 0.0159% ( 1) to 100.0000% ( 1)]
00:09:53.443 
00:09:53.443 real 0m1.303s
00:09:53.443 user 0m1.104s
00:09:53.443 sys 0m0.151s
00:09:53.443 11:53:43 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:53.443 11:53:43 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:09:53.443 ************************************
00:09:53.443 END TEST nvme_overhead
00:09:53.443 ************************************
00:09:53.443 11:53:43 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:09:53.443 11:53:43 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:09:53.443 11:53:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:53.443 11:53:43 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:53.443 ************************************
00:09:53.443 START TEST nvme_arbitration
00:09:53.443 ************************************
00:09:53.443 11:53:43 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:09:57.634 Initializing NVMe Controllers
00:09:57.634 Attached to 0000:00:10.0
00:09:57.634 Attached to 0000:00:11.0
00:09:57.634 Attached to 0000:00:13.0
00:09:57.634 Attached to 0000:00:12.0
00:09:57.634 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:09:57.634 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:09:57.634 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:09:57.634 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:09:57.634 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:09:57.634 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:09:57.634 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:09:57.634 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:09:57.634 Initialization complete. Launching workers.
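The "urgent priority queue" workers that start below exist because the arbitration example asks the controller for weighted-round-robin arbitration at attach time and then allocates I/O queue pairs in different priority classes. A minimal sketch of the queue-pair side using the public API:

#include "spdk/nvme.h"

/*
 * Allocate an I/O queue pair arbitrated in the URGENT class. For qprio to
 * have any effect, the controller must have been brought up with weighted
 * round robin, e.g. opts->arb_mechanism = SPDK_NVME_CC_AMS_WRR inside the
 * probe callback.
 */
static struct spdk_nvme_qpair *
alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_io_qpair_opts opts;

    spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
    opts.qprio = SPDK_NVME_QPRIO_URGENT;

    return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}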
00:09:57.634 Starting thread on core 1 with urgent priority queue 00:09:57.634 Starting thread on core 2 with urgent priority queue 00:09:57.634 Starting thread on core 0 with urgent priority queue 00:09:57.634 Starting thread on core 3 with urgent priority queue 00:09:57.634 QEMU NVMe Ctrl (12340 ) core 0: 597.33 IO/s 167.41 secs/100000 ios 00:09:57.634 QEMU NVMe Ctrl (12342 ) core 0: 597.33 IO/s 167.41 secs/100000 ios 00:09:57.634 QEMU NVMe Ctrl (12341 ) core 1: 661.33 IO/s 151.21 secs/100000 ios 00:09:57.634 QEMU NVMe Ctrl (12342 ) core 1: 661.33 IO/s 151.21 secs/100000 ios 00:09:57.634 QEMU NVMe Ctrl (12343 ) core 2: 448.00 IO/s 223.21 secs/100000 ios 00:09:57.634 QEMU NVMe Ctrl (12342 ) core 3: 554.67 IO/s 180.29 secs/100000 ios 00:09:57.634 ======================================================== 00:09:57.634 00:09:57.634 00:09:57.634 real 0m3.445s 00:09:57.634 user 0m9.424s 00:09:57.634 sys 0m0.169s 00:09:57.634 11:53:46 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.634 11:53:46 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:57.634 ************************************ 00:09:57.634 END TEST nvme_arbitration 00:09:57.634 ************************************ 00:09:57.634 11:53:46 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:57.634 11:53:46 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:57.634 11:53:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.634 11:53:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:57.634 ************************************ 00:09:57.634 START TEST nvme_single_aen 00:09:57.634 ************************************ 00:09:57.634 11:53:46 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:57.634 Asynchronous Event Request test 00:09:57.634 Attached to 0000:00:10.0 00:09:57.634 Attached to 0000:00:11.0 00:09:57.634 Attached to 0000:00:13.0 00:09:57.634 Attached to 0000:00:12.0 00:09:57.634 Reset controller to setup AER completions for this process 00:09:57.634 Registering asynchronous event callbacks... 
00:09:57.634 Getting orig temperature thresholds of all controllers 00:09:57.634 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:57.634 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:57.634 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:57.634 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:57.634 Setting all controllers temperature threshold low to trigger AER 00:09:57.634 Waiting for all controllers temperature threshold to be set lower 00:09:57.634 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:57.634 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:57.634 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:57.634 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:57.634 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:57.634 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:57.634 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:57.634 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:57.634 Waiting for all controllers to trigger AER and reset threshold 00:09:57.634 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:57.634 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:57.634 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:57.634 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:57.634 Cleaning up... 00:09:57.634 00:09:57.634 real 0m0.311s 00:09:57.634 user 0m0.110s 00:09:57.634 sys 0m0.155s 00:09:57.634 ************************************ 00:09:57.634 END TEST nvme_single_aen 00:09:57.634 ************************************ 00:09:57.634 11:53:47 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.634 11:53:47 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:57.634 11:53:47 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:57.634 11:53:47 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:57.634 11:53:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.634 11:53:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:57.634 ************************************ 00:09:57.634 START TEST nvme_doorbell_aers 00:09:57.634 ************************************ 00:09:57.634 11:53:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:09:57.634 11:53:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:57.634 11:53:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:57.634 11:53:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:57.634 11:53:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:57.634 11:53:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:57.634 11:53:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:09:57.634 11:53:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:57.634 11:53:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:57.634 11:53:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
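While the nvme_doorbell_aers setup trace continues below, the AER mechanics in the nvme_single_aen output above reduce to two public calls: register an AER handler on the controller, then set the Temperature Threshold feature below the device's current temperature so an event fires, which is the "Setting all controllers temperature threshold low to trigger AER" step. A hedged sketch; the handler body and the threshold choice are illustrative.

#include <stdio.h>
#include "spdk/nvme.h"

static void
aer_handler(void *arg, const struct spdk_nvme_cpl *cpl)
{
    /* temperature events point at log page 2 (SMART / health information) */
    printf("aer_cb: cdw0 0x%x\n", cpl->cdw0);
}

/*
 * Register the handler, then lower the Temperature Threshold feature to
 * `kelvin` so the device's current temperature exceeds it and an AER fires.
 */
static int
trigger_temperature_aer(struct spdk_nvme_ctrlr *ctrlr, uint16_t kelvin,
                        spdk_nvme_cmd_cb cb_fn, void *cb_arg)
{
    spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_handler, NULL);

    return spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
            SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
            kelvin /* cdw11: TMPTH in kelvin */,
            0 /* cdw12 */, NULL, 0, cb_fn, cb_arg);
}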
00:09:57.634 11:53:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:57.634 11:53:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:57.634 11:53:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:57.634 11:53:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:57.893 [2024-11-27 11:53:47.708308] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:10:07.905 Executing: test_write_invalid_db 00:10:07.905 Waiting for AER completion... 00:10:07.905 Failure: test_write_invalid_db 00:10:07.905 00:10:07.905 Executing: test_invalid_db_write_overflow_sq 00:10:07.905 Waiting for AER completion... 00:10:07.905 Failure: test_invalid_db_write_overflow_sq 00:10:07.905 00:10:07.905 Executing: test_invalid_db_write_overflow_cq 00:10:07.905 Waiting for AER completion... 00:10:07.905 Failure: test_invalid_db_write_overflow_cq 00:10:07.905 00:10:07.905 11:53:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:07.905 11:53:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:07.905 [2024-11-27 11:53:57.768453] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:10:17.889 Executing: test_write_invalid_db 00:10:17.889 Waiting for AER completion... 00:10:17.889 Failure: test_write_invalid_db 00:10:17.889 00:10:17.889 Executing: test_invalid_db_write_overflow_sq 00:10:17.889 Waiting for AER completion... 00:10:17.889 Failure: test_invalid_db_write_overflow_sq 00:10:17.889 00:10:17.889 Executing: test_invalid_db_write_overflow_cq 00:10:17.889 Waiting for AER completion... 00:10:17.889 Failure: test_invalid_db_write_overflow_cq 00:10:17.889 00:10:17.889 11:54:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:17.889 11:54:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:17.889 [2024-11-27 11:54:07.837779] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:10:27.870 Executing: test_write_invalid_db 00:10:27.871 Waiting for AER completion... 00:10:27.871 Failure: test_write_invalid_db 00:10:27.871 00:10:27.871 Executing: test_invalid_db_write_overflow_sq 00:10:27.871 Waiting for AER completion... 00:10:27.871 Failure: test_invalid_db_write_overflow_sq 00:10:27.871 00:10:27.871 Executing: test_invalid_db_write_overflow_cq 00:10:27.871 Waiting for AER completion... 
00:10:27.871 Failure: test_invalid_db_write_overflow_cq 00:10:27.871 00:10:27.871 11:54:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:27.871 11:54:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:27.871 [2024-11-27 11:54:17.885046] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:10:37.853 Executing: test_write_invalid_db 00:10:37.853 Waiting for AER completion... 00:10:37.853 Failure: test_write_invalid_db 00:10:37.853 00:10:37.853 Executing: test_invalid_db_write_overflow_sq 00:10:37.853 Waiting for AER completion... 00:10:37.853 Failure: test_invalid_db_write_overflow_sq 00:10:37.853 00:10:37.853 Executing: test_invalid_db_write_overflow_cq 00:10:37.853 Waiting for AER completion... 00:10:37.853 Failure: test_invalid_db_write_overflow_cq 00:10:37.853 00:10:37.853 00:10:37.853 real 0m40.330s 00:10:37.853 user 0m28.318s 00:10:37.853 sys 0m11.607s 00:10:37.853 11:54:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.853 11:54:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:37.853 ************************************ 00:10:37.853 END TEST nvme_doorbell_aers 00:10:37.853 ************************************ 00:10:37.853 11:54:27 nvme -- nvme/nvme.sh@97 -- # uname 00:10:37.853 11:54:27 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:37.853 11:54:27 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:37.853 11:54:27 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:37.853 11:54:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.853 11:54:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:37.853 ************************************ 00:10:37.853 START TEST nvme_multi_aen 00:10:37.853 ************************************ 00:10:37.853 11:54:27 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:38.113 [2024-11-27 11:54:27.965078] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:10:38.113 [2024-11-27 11:54:27.965166] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:10:38.113 [2024-11-27 11:54:27.965183] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:10:38.113 [2024-11-27 11:54:27.966738] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:10:38.113 [2024-11-27 11:54:27.966783] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:10:38.113 [2024-11-27 11:54:27.966797] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:10:38.113 [2024-11-27 11:54:27.968271] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. 
Dropping the request. 00:10:38.113 [2024-11-27 11:54:27.968312] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:10:38.113 [2024-11-27 11:54:27.968326] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:10:38.113 [2024-11-27 11:54:27.969782] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:10:38.113 [2024-11-27 11:54:27.969821] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:10:38.113 [2024-11-27 11:54:27.969835] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64415) is not found. Dropping the request. 00:10:38.113 Child process pid: 64931 00:10:38.372 [Child] Asynchronous Event Request test 00:10:38.372 [Child] Attached to 0000:00:10.0 00:10:38.372 [Child] Attached to 0000:00:11.0 00:10:38.372 [Child] Attached to 0000:00:13.0 00:10:38.372 [Child] Attached to 0000:00:12.0 00:10:38.372 [Child] Registering asynchronous event callbacks... 00:10:38.372 [Child] Getting orig temperature thresholds of all controllers 00:10:38.372 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:38.372 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:38.372 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:38.372 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:38.373 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:38.373 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:38.373 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:38.373 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:38.373 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:38.373 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:38.373 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:38.373 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:38.373 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:38.373 [Child] Cleaning up... 00:10:38.373 Asynchronous Event Request test 00:10:38.373 Attached to 0000:00:10.0 00:10:38.373 Attached to 0000:00:11.0 00:10:38.373 Attached to 0000:00:13.0 00:10:38.373 Attached to 0000:00:12.0 00:10:38.373 Reset controller to setup AER completions for this process 00:10:38.373 Registering asynchronous event callbacks... 
00:10:38.373 Getting orig temperature thresholds of all controllers 00:10:38.373 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:38.373 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:38.373 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:38.373 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:38.373 Setting all controllers temperature threshold low to trigger AER 00:10:38.373 Waiting for all controllers temperature threshold to be set lower 00:10:38.373 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:38.373 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:38.373 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:38.373 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:38.373 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:38.373 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:38.373 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:38.373 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:38.373 Waiting for all controllers to trigger AER and reset threshold 00:10:38.373 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:38.373 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:38.373 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:38.373 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:38.373 Cleaning up... 00:10:38.373 00:10:38.373 real 0m0.641s 00:10:38.373 user 0m0.237s 00:10:38.373 sys 0m0.302s 00:10:38.373 11:54:28 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.373 ************************************ 00:10:38.373 END TEST nvme_multi_aen 00:10:38.373 ************************************ 00:10:38.373 11:54:28 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:38.373 11:54:28 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:38.373 11:54:28 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:38.373 11:54:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.373 11:54:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:38.373 ************************************ 00:10:38.373 START TEST nvme_startup 00:10:38.373 ************************************ 00:10:38.373 11:54:28 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:38.631 Initializing NVMe Controllers 00:10:38.631 Attached to 0000:00:10.0 00:10:38.631 Attached to 0000:00:11.0 00:10:38.631 Attached to 0000:00:13.0 00:10:38.631 Attached to 0000:00:12.0 00:10:38.631 Initialization complete. 00:10:38.631 Time used:191652.547 (us). 
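The nvme_startup figure above, "Time used:191652.547 (us).", is wall-clock time around controller bring-up, roughly 0.19 s for all four controllers here. Below is a sketch of taking that measurement with SPDK's tick counter; probe_cb and attach_cb stand in for the usual probe callbacks and are stubbed only to keep the sketch self-contained.

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    return true;
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
    /* per-controller bring-up work would go here */
}

/* Wall-clock the probe/attach pass, printed in the log's "Time used" format. */
static void
time_probe(void)
{
    uint64_t start = spdk_get_ticks();

    spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);

    printf("Time used:%.3f (us).\n",
           (double)(spdk_get_ticks() - start) * 1000.0 * 1000.0 /
           (double)spdk_get_ticks_hz());
}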
00:10:38.891 ************************************ 00:10:38.891 END TEST nvme_startup 00:10:38.891 ************************************ 00:10:38.891 00:10:38.891 real 0m0.290s 00:10:38.891 user 0m0.097s 00:10:38.891 sys 0m0.151s 00:10:38.891 11:54:28 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.891 11:54:28 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:38.891 11:54:28 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:38.891 11:54:28 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:38.891 11:54:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.891 11:54:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:38.891 ************************************ 00:10:38.891 START TEST nvme_multi_secondary 00:10:38.891 ************************************ 00:10:38.891 11:54:28 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:10:38.891 11:54:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=64987 00:10:38.891 11:54:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:38.891 11:54:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=64988 00:10:38.891 11:54:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:38.891 11:54:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:42.176 Initializing NVMe Controllers 00:10:42.176 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:42.176 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:42.176 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:42.176 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:42.176 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:42.176 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:42.176 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:42.176 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:42.176 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:42.176 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:42.176 Initialization complete. Launching workers. 
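Note the "-i 0" on each spdk_nvme_perf invocation above: nvme_multi_secondary runs one perf instance per core mask (0x1, 0x2, 0x4) against the same shared-memory instance ID, so the first process to initialize becomes the SPDK primary and the other two attach as secondaries sharing the already-attached controllers. A sketch of the environment call that flag maps to, with an illustrative helper name:

#include "spdk/env.h"

/*
 * Join (or create) SPDK shared-memory instance 0, mirroring "-i 0" above.
 * Whichever process initializes first is the primary; later processes with
 * the same shm_id attach as secondaries and see the same controllers.
 */
static int
init_shared_env(const char *name)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = name; /* e.g. "perf" */
    opts.shm_id = 0;  /* matches the -i 0 argument in the log */

    return spdk_env_init(&opts);
}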
00:10:42.176 ======================================================== 00:10:42.176 Latency(us) 00:10:42.176 Device Information : IOPS MiB/s Average min max 00:10:42.176 PCIE (0000:00:10.0) NSID 1 from core 1: 4884.80 19.08 3273.22 1655.60 8682.43 00:10:42.176 PCIE (0000:00:11.0) NSID 1 from core 1: 4884.80 19.08 3274.96 1655.62 8597.29 00:10:42.176 PCIE (0000:00:13.0) NSID 1 from core 1: 4884.80 19.08 3275.03 1688.70 8247.93 00:10:42.176 PCIE (0000:00:12.0) NSID 1 from core 1: 4884.80 19.08 3275.12 1529.55 8054.79 00:10:42.176 PCIE (0000:00:12.0) NSID 2 from core 1: 4884.80 19.08 3275.19 1581.31 7515.52 00:10:42.176 PCIE (0000:00:12.0) NSID 3 from core 1: 4884.80 19.08 3275.50 1450.16 7936.73 00:10:42.176 ======================================================== 00:10:42.176 Total : 29308.77 114.49 3274.84 1450.16 8682.43 00:10:42.176 00:10:42.434 Initializing NVMe Controllers 00:10:42.434 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:42.434 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:42.434 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:42.434 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:42.434 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:42.434 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:42.434 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:42.434 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:42.434 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:42.434 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:42.434 Initialization complete. Launching workers. 00:10:42.434 ======================================================== 00:10:42.434 Latency(us) 00:10:42.434 Device Information : IOPS MiB/s Average min max 00:10:42.434 PCIE (0000:00:10.0) NSID 1 from core 2: 3620.26 14.14 4417.67 1196.29 12743.31 00:10:42.434 PCIE (0000:00:11.0) NSID 1 from core 2: 3620.26 14.14 4419.15 898.23 11187.94 00:10:42.434 PCIE (0000:00:13.0) NSID 1 from core 2: 3620.26 14.14 4418.65 1179.27 12341.66 00:10:42.434 PCIE (0000:00:12.0) NSID 1 from core 2: 3620.26 14.14 4418.64 1230.53 12394.08 00:10:42.435 PCIE (0000:00:12.0) NSID 2 from core 2: 3620.26 14.14 4419.03 1213.88 13562.57 00:10:42.435 PCIE (0000:00:12.0) NSID 3 from core 2: 3620.26 14.14 4419.04 1151.62 13804.08 00:10:42.435 ======================================================== 00:10:42.435 Total : 21721.56 84.85 4418.70 898.23 13804.08 00:10:42.435 00:10:42.435 11:54:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 64987 00:10:44.338 Initializing NVMe Controllers 00:10:44.338 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:44.338 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:44.338 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:44.338 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:44.338 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:44.338 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:44.338 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:44.338 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:44.338 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:44.338 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:44.338 Initialization complete. Launching workers. 
00:10:44.338 ======================================================== 00:10:44.338 Latency(us) 00:10:44.338 Device Information : IOPS MiB/s Average min max 00:10:44.338 PCIE (0000:00:10.0) NSID 1 from core 0: 8019.14 31.32 1993.67 926.53 7355.28 00:10:44.338 PCIE (0000:00:11.0) NSID 1 from core 0: 8019.14 31.32 1994.74 935.44 7180.49 00:10:44.338 PCIE (0000:00:13.0) NSID 1 from core 0: 8019.14 31.32 1994.71 829.73 6723.24 00:10:44.338 PCIE (0000:00:12.0) NSID 1 from core 0: 8019.14 31.32 1994.68 796.51 6884.71 00:10:44.338 PCIE (0000:00:12.0) NSID 2 from core 0: 8019.14 31.32 1994.65 759.50 7743.21 00:10:44.339 PCIE (0000:00:12.0) NSID 3 from core 0: 8022.34 31.34 1993.82 706.59 7630.16 00:10:44.339 ======================================================== 00:10:44.339 Total : 48118.05 187.96 1994.38 706.59 7743.21 00:10:44.339 00:10:44.339 11:54:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 64988 00:10:44.339 11:54:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65057 00:10:44.339 11:54:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:44.339 11:54:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65058 00:10:44.339 11:54:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:44.339 11:54:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:47.625 Initializing NVMe Controllers 00:10:47.625 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:47.625 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:47.625 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:47.625 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:47.625 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:47.625 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:47.625 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:47.625 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:47.625 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:47.625 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:47.625 Initialization complete. Launching workers. 
00:10:47.625 ======================================================== 00:10:47.625 Latency(us) 00:10:47.625 Device Information : IOPS MiB/s Average min max 00:10:47.625 PCIE (0000:00:10.0) NSID 1 from core 1: 5276.12 20.61 3030.46 1066.08 8279.85 00:10:47.625 PCIE (0000:00:11.0) NSID 1 from core 1: 5276.12 20.61 3032.00 1080.03 8426.53 00:10:47.625 PCIE (0000:00:13.0) NSID 1 from core 1: 5276.12 20.61 3032.14 935.54 7130.71 00:10:47.625 PCIE (0000:00:12.0) NSID 1 from core 1: 5276.12 20.61 3032.42 1075.91 7585.29 00:10:47.625 PCIE (0000:00:12.0) NSID 2 from core 1: 5276.12 20.61 3032.55 1060.44 7937.68 00:10:47.625 PCIE (0000:00:12.0) NSID 3 from core 1: 5281.45 20.63 3029.64 1080.69 8385.52 00:10:47.625 ======================================================== 00:10:47.625 Total : 31662.07 123.68 3031.53 935.54 8426.53 00:10:47.625 00:10:47.883 Initializing NVMe Controllers 00:10:47.883 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:47.883 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:47.883 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:47.883 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:47.883 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:47.883 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:47.883 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:47.883 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:47.883 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:47.883 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:47.883 Initialization complete. Launching workers. 00:10:47.883 ======================================================== 00:10:47.883 Latency(us) 00:10:47.883 Device Information : IOPS MiB/s Average min max 00:10:47.883 PCIE (0000:00:10.0) NSID 1 from core 0: 4833.08 18.88 3307.89 999.37 7244.26 00:10:47.883 PCIE (0000:00:11.0) NSID 1 from core 0: 4833.08 18.88 3309.69 1031.01 7038.41 00:10:47.883 PCIE (0000:00:13.0) NSID 1 from core 0: 4833.08 18.88 3309.62 1031.87 6651.46 00:10:47.883 PCIE (0000:00:12.0) NSID 1 from core 0: 4833.08 18.88 3309.54 1043.07 6922.37 00:10:47.883 PCIE (0000:00:12.0) NSID 2 from core 0: 4833.08 18.88 3309.48 1040.86 7591.49 00:10:47.883 PCIE (0000:00:12.0) NSID 3 from core 0: 4833.08 18.88 3309.41 822.06 7683.35 00:10:47.883 ======================================================== 00:10:47.883 Total : 28998.49 113.28 3309.27 822.06 7683.35 00:10:47.883 00:10:49.785 Initializing NVMe Controllers 00:10:49.785 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:49.785 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:49.785 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:49.785 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:49.785 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:49.785 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:49.785 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:49.785 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:49.785 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:49.785 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:49.785 Initialization complete. Launching workers. 
00:10:49.785 ======================================================== 00:10:49.785 Latency(us) 00:10:49.785 Device Information : IOPS MiB/s Average min max 00:10:49.785 PCIE (0000:00:10.0) NSID 1 from core 2: 3592.07 14.03 4452.92 1038.11 12203.86 00:10:49.785 PCIE (0000:00:11.0) NSID 1 from core 2: 3592.07 14.03 4454.03 949.05 12720.55 00:10:49.785 PCIE (0000:00:13.0) NSID 1 from core 2: 3592.07 14.03 4453.95 1037.08 13242.42 00:10:49.785 PCIE (0000:00:12.0) NSID 1 from core 2: 3592.07 14.03 4453.42 1048.75 12273.74 00:10:49.785 PCIE (0000:00:12.0) NSID 2 from core 2: 3592.07 14.03 4453.79 1054.50 12561.63 00:10:49.785 PCIE (0000:00:12.0) NSID 3 from core 2: 3592.07 14.03 4453.50 1059.06 12695.22 00:10:49.785 ======================================================== 00:10:49.785 Total : 21552.44 84.19 4453.60 949.05 13242.42 00:10:49.785 00:10:49.785 ************************************ 00:10:49.785 END TEST nvme_multi_secondary 00:10:49.785 ************************************ 00:10:49.785 11:54:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65057 00:10:49.785 11:54:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65058 00:10:49.785 00:10:49.785 real 0m10.861s 00:10:49.785 user 0m18.535s 00:10:49.785 sys 0m1.130s 00:10:49.785 11:54:39 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.785 11:54:39 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:49.785 11:54:39 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:49.785 11:54:39 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:49.785 11:54:39 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/63995 ]] 00:10:49.785 11:54:39 nvme -- common/autotest_common.sh@1094 -- # kill 63995 00:10:49.785 11:54:39 nvme -- common/autotest_common.sh@1095 -- # wait 63995 00:10:49.785 [2024-11-27 11:54:39.706085] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64930) is not found. Dropping the request. 00:10:49.785 [2024-11-27 11:54:39.706563] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64930) is not found. Dropping the request. 00:10:49.785 [2024-11-27 11:54:39.706654] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64930) is not found. Dropping the request. 00:10:49.785 [2024-11-27 11:54:39.706708] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64930) is not found. Dropping the request. 00:10:49.785 [2024-11-27 11:54:39.713215] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64930) is not found. Dropping the request. 00:10:49.785 [2024-11-27 11:54:39.713325] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64930) is not found. Dropping the request. 00:10:49.785 [2024-11-27 11:54:39.713396] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64930) is not found. Dropping the request. 00:10:49.785 [2024-11-27 11:54:39.713448] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64930) is not found. Dropping the request. 00:10:49.785 [2024-11-27 11:54:39.718170] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64930) is not found. Dropping the request. 
00:10:49.785 [2024-11-27 11:54:39.718242] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64930) is not found. Dropping the request. 00:10:49.785 [2024-11-27 11:54:39.718272] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64930) is not found. Dropping the request. 00:10:49.785 [2024-11-27 11:54:39.718303] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64930) is not found. Dropping the request. 00:10:49.785 [2024-11-27 11:54:39.722888] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64930) is not found. Dropping the request. 00:10:49.785 [2024-11-27 11:54:39.722962] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64930) is not found. Dropping the request. 00:10:49.785 [2024-11-27 11:54:39.722992] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64930) is not found. Dropping the request. 00:10:49.785 [2024-11-27 11:54:39.723023] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64930) is not found. Dropping the request. 00:10:50.043 [2024-11-27 11:54:39.873675] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:10:50.043 11:54:39 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:10:50.043 11:54:39 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:10:50.043 11:54:39 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:50.043 11:54:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:50.043 11:54:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.043 11:54:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:50.043 ************************************ 00:10:50.043 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:50.043 ************************************ 00:10:50.043 11:54:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:50.043 * Looking for test storage... 
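The burst of "Dropping the request" errors above is expected during teardown: admin commands still pending on behalf of a process that has already exited (pid 64930 here) are dropped rather than completed to a dead owner while kill_stub reaps the long-lived stub process. The cleanup itself reduces to a small reap-and-remove pattern (a sketch; the concrete PID and marker path are the values from this run):

  if [[ -e /proc/63995 ]]; then   # stub still alive?
      kill 63995
      wait 63995                  # reap it; the CUSE "thread exited" notice prints during shutdown
  fi
  rm -f /var/run/spdk_stub0       # remove the marker so later tests start clean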
00:10:50.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:50.043 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:50.043 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:10:50.043 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:50.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.303 --rc genhtml_branch_coverage=1 00:10:50.303 --rc genhtml_function_coverage=1 00:10:50.303 --rc genhtml_legend=1 00:10:50.303 --rc geninfo_all_blocks=1 00:10:50.303 --rc geninfo_unexecuted_blocks=1 00:10:50.303 00:10:50.303 ' 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:50.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.303 --rc genhtml_branch_coverage=1 00:10:50.303 --rc genhtml_function_coverage=1 00:10:50.303 --rc genhtml_legend=1 00:10:50.303 --rc geninfo_all_blocks=1 00:10:50.303 --rc geninfo_unexecuted_blocks=1 00:10:50.303 00:10:50.303 ' 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:50.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.303 --rc genhtml_branch_coverage=1 00:10:50.303 --rc genhtml_function_coverage=1 00:10:50.303 --rc genhtml_legend=1 00:10:50.303 --rc geninfo_all_blocks=1 00:10:50.303 --rc geninfo_unexecuted_blocks=1 00:10:50.303 00:10:50.303 ' 00:10:50.303 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:50.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.303 --rc genhtml_branch_coverage=1 00:10:50.303 --rc genhtml_function_coverage=1 00:10:50.303 --rc genhtml_legend=1 00:10:50.303 --rc geninfo_all_blocks=1 00:10:50.304 --rc geninfo_unexecuted_blocks=1 00:10:50.304 00:10:50.304 ' 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:50.304 
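The scripts/common.sh trace above (lt, cmp_versions, ver1/ver2) is the harness asking whether the installed lcov predates 2.0, which the surrounding code uses to pick compatible --rc option spellings such as lcov_branch_coverage=1. Condensed into a standalone sketch (simplified: the real helper also funnels each component through a decimal() normalizer):

  lt() {  # true when version $1 sorts before version $2
      local -a ver1 ver2
      local v max
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1  # versions are equal
  }
  lt 1.15 2 && echo "lcov < 2: keep the legacy lcov_* option names"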
11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65220 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65220 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65220 ']' 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.304 11:54:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:50.571 [2024-11-27 11:54:40.398641] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:10:50.571 [2024-11-27 11:54:40.398760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65220 ] 00:10:50.571 [2024-11-27 11:54:40.600185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.897 [2024-11-27 11:54:40.720606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.897 [2024-11-27 11:54:40.720769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.897 [2024-11-27 11:54:40.720980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.897 [2024-11-27 11:54:40.721661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.896 11:54:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.896 11:54:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:10:51.896 11:54:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:51.896 11:54:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.896 11:54:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:51.896 nvme0n1 00:10:51.896 11:54:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.896 11:54:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:51.896 11:54:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_fpvGP.txt 00:10:51.896 11:54:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:51.896 11:54:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.896 11:54:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:51.896 true 00:10:51.896 11:54:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.896 11:54:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:51.896 11:54:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732708481 00:10:51.897 11:54:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65255 00:10:51.897 11:54:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:51.897 11:54:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:51.897 
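Setup for the reset test is now complete; condensing the RPC sequence traced above and resolved just below into its essential steps (a sketch: rpc.py stands in for the harness's rpc_cmd wrapper, and $get_features_cmd stands for the base64 command blob shown in the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  # Arm the injection: hold the next GET FEATURES (opc 10) admin command for
  # up to 15 s, then complete it with SCT 0 / SC 1 (Invalid Command Opcode).
  $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$get_features_cmd" &
  get_feat_pid=$!                         # this RPC is now stuck on the held command
  sleep 2
  $rpc bdev_nvme_reset_controller nvme0   # the reset must flush the stuck command
  wait "$get_feat_pid"                    # returns once the injected status lands

The test then decodes the completion saved in the /tmp/err_inj_*.txt file and checks that the observed SC/SCT match the injected values and that the whole exchange stayed under test_timeout.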
11:54:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:53.801 [2024-11-27 11:54:43.698760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:53.801 [2024-11-27 11:54:43.699143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:53.801 [2024-11-27 11:54:43.699173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:53.801 [2024-11-27 11:54:43.699188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.801 [2024-11-27 11:54:43.701481] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:53.801 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65255 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65255 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65255 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_fpvGP.txt 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_fpvGP.txt 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65220 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65220 ']' 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65220 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.801 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65220 00:10:54.060 killing process with pid 65220 00:10:54.060 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:54.060 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:54.060 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65220' 00:10:54.060 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65220 00:10:54.060 11:54:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65220 00:10:56.593 11:54:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:56.593 11:54:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:56.593 ************************************ 00:10:56.593 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:56.593 ************************************ 00:10:56.593 00:10:56.593 real 0m6.320s 
00:10:56.593 user 0m21.861s 00:10:56.593 sys 0m0.833s 00:10:56.593 11:54:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.593 11:54:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:56.593 11:54:46 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:56.593 11:54:46 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:56.593 11:54:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:56.593 11:54:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.593 11:54:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:56.593 ************************************ 00:10:56.593 START TEST nvme_fio 00:10:56.593 ************************************ 00:10:56.593 11:54:46 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:10:56.593 11:54:46 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:56.593 11:54:46 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:56.593 11:54:46 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:56.593 11:54:46 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:56.593 11:54:46 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:10:56.593 11:54:46 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:56.593 11:54:46 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:56.593 11:54:46 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:56.593 11:54:46 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:56.593 11:54:46 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:56.593 11:54:46 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:56.593 11:54:46 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:56.593 11:54:46 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:56.593 11:54:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:56.593 11:54:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:56.852 11:54:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:56.852 11:54:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:57.109 11:54:47 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:57.110 11:54:47 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:57.110 11:54:47 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:57.110 11:54:47 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:57.110 11:54:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:57.110 11:54:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:57.110 11:54:47 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:57.110 11:54:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:57.110 11:54:47 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:57.110 11:54:47 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:57.110 11:54:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:57.110 11:54:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:57.110 11:54:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:57.110 11:54:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:57.110 11:54:47 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:57.110 11:54:47 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:57.110 11:54:47 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:57.110 11:54:47 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:57.369 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:57.369 fio-3.35 00:10:57.369 Starting 1 thread 00:11:01.574 00:11:01.574 test: (groupid=0, jobs=1): err= 0: pid=65402: Wed Nov 27 11:54:50 2024 00:11:01.574 read: IOPS=21.9k, BW=85.7MiB/s (89.8MB/s)(171MiB/2001msec) 00:11:01.574 slat (nsec): min=3740, max=63096, avg=4618.81, stdev=1295.45 00:11:01.574 clat (usec): min=275, max=11418, avg=2914.19, stdev=533.70 00:11:01.574 lat (usec): min=281, max=11481, avg=2918.81, stdev=534.33 00:11:01.574 clat percentiles (usec): 00:11:01.574 | 1.00th=[ 2245], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2737], 00:11:01.574 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2802], 60.00th=[ 2835], 00:11:01.574 | 70.00th=[ 2900], 80.00th=[ 2966], 90.00th=[ 3097], 95.00th=[ 3425], 00:11:01.574 | 99.00th=[ 5407], 99.50th=[ 6325], 99.90th=[ 8586], 99.95th=[ 9110], 00:11:01.574 | 99.99th=[10945] 00:11:01.574 bw ( KiB/s): min=82192, max=91776, per=99.34%, avg=87128.00, stdev=4798.49, samples=3 00:11:01.574 iops : min=20548, max=22944, avg=21782.00, stdev=1199.62, samples=3 00:11:01.574 write: IOPS=21.8k, BW=85.1MiB/s (89.2MB/s)(170MiB/2001msec); 0 zone resets 00:11:01.574 slat (usec): min=3, max=113, avg= 4.79, stdev= 1.45 00:11:01.574 clat (usec): min=202, max=11043, avg=2916.38, stdev=522.76 00:11:01.574 lat (usec): min=207, max=11056, avg=2921.17, stdev=523.42 00:11:01.574 clat percentiles (usec): 00:11:01.574 | 1.00th=[ 2245], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2737], 00:11:01.574 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2835], 00:11:01.574 | 70.00th=[ 2900], 80.00th=[ 2966], 90.00th=[ 3097], 95.00th=[ 3425], 00:11:01.574 | 99.00th=[ 5276], 99.50th=[ 6390], 99.90th=[ 8586], 99.95th=[ 9503], 00:11:01.574 | 99.99th=[10814] 00:11:01.574 bw ( KiB/s): min=81880, max=91744, per=100.00%, avg=87344.00, stdev=5017.34, samples=3 00:11:01.574 iops : min=20470, max=22936, avg=21836.00, stdev=1254.33, samples=3 00:11:01.574 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:01.574 lat (msec) : 2=0.61%, 4=96.55%, 10=2.76%, 20=0.03% 00:11:01.574 cpu : usr=99.35%, sys=0.00%, ctx=3, 
majf=0, minf=607 00:11:01.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:01.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.574 issued rwts: total=43876,43583,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.574 00:11:01.574 Run status group 0 (all jobs): 00:11:01.574 READ: bw=85.7MiB/s (89.8MB/s), 85.7MiB/s-85.7MiB/s (89.8MB/s-89.8MB/s), io=171MiB (180MB), run=2001-2001msec 00:11:01.574 WRITE: bw=85.1MiB/s (89.2MB/s), 85.1MiB/s-85.1MiB/s (89.2MB/s-89.2MB/s), io=170MiB (179MB), run=2001-2001msec 00:11:01.574 ----------------------------------------------------- 00:11:01.574 Suppressions used: 00:11:01.574 count bytes template 00:11:01.574 1 32 /usr/src/fio/parse.c 00:11:01.574 1 8 libtcmalloc_minimal.so 00:11:01.574 ----------------------------------------------------- 00:11:01.574 00:11:01.574 11:54:51 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:01.574 11:54:51 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:01.574 11:54:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:01.574 11:54:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:01.574 11:54:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:01.574 11:54:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:01.574 11:54:51 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:01.574 11:54:51 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:01.574 11:54:51 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:01.574 11:54:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:01.575 11:54:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:01.575 11:54:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:01.575 11:54:51 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:01.575 11:54:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:01.575 11:54:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:01.575 11:54:51 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:01.575 11:54:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:01.575 11:54:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:01.575 11:54:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:01.833 11:54:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:01.833 11:54:51 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:01.833 11:54:51 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:01.833 11:54:51 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:01.833 11:54:51 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:01.833 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:01.833 fio-3.35 00:11:01.833 Starting 1 thread 00:11:06.023 00:11:06.023 test: (groupid=0, jobs=1): err= 0: pid=65468: Wed Nov 27 11:54:55 2024 00:11:06.023 read: IOPS=21.6k, BW=84.6MiB/s (88.7MB/s)(169MiB/2001msec) 00:11:06.023 slat (nsec): min=4087, max=90840, avg=5144.14, stdev=1303.92 00:11:06.023 clat (usec): min=230, max=12017, avg=2946.31, stdev=336.32 00:11:06.023 lat (usec): min=234, max=12108, avg=2951.46, stdev=336.81 00:11:06.024 clat percentiles (usec): 00:11:06.024 | 1.00th=[ 2671], 5.00th=[ 2737], 10.00th=[ 2769], 20.00th=[ 2802], 00:11:06.024 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:11:06.024 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3097], 95.00th=[ 3228], 00:11:06.024 | 99.00th=[ 4080], 99.50th=[ 5276], 99.90th=[ 6718], 99.95th=[ 9503], 00:11:06.024 | 99.99th=[11731] 00:11:06.024 bw ( KiB/s): min=81320, max=88904, per=98.64%, avg=85410.67, stdev=3827.12, samples=3 00:11:06.024 iops : min=20330, max=22226, avg=21352.67, stdev=956.78, samples=3 00:11:06.024 write: IOPS=21.5k, BW=83.9MiB/s (88.0MB/s)(168MiB/2001msec); 0 zone resets 00:11:06.024 slat (nsec): min=4353, max=51037, avg=5581.04, stdev=1307.82 00:11:06.024 clat (usec): min=172, max=11751, avg=2960.22, stdev=349.33 00:11:06.024 lat (usec): min=178, max=11802, avg=2965.80, stdev=349.81 00:11:06.024 clat percentiles (usec): 00:11:06.024 | 1.00th=[ 2671], 5.00th=[ 2737], 10.00th=[ 2769], 20.00th=[ 2835], 00:11:06.024 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2900], 60.00th=[ 2933], 00:11:06.024 | 70.00th=[ 2966], 80.00th=[ 3032], 90.00th=[ 3097], 95.00th=[ 3261], 00:11:06.024 | 99.00th=[ 4228], 99.50th=[ 5342], 99.90th=[ 7242], 99.95th=[ 9765], 00:11:06.024 | 99.99th=[11469] 00:11:06.024 bw ( KiB/s): min=81224, max=89424, per=99.60%, avg=85592.00, stdev=4126.19, samples=3 00:11:06.024 iops : min=20306, max=22356, avg=21398.00, stdev=1031.55, samples=3 00:11:06.024 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:06.024 lat (msec) : 2=0.05%, 4=98.82%, 10=1.05%, 20=0.04% 00:11:06.024 cpu : usr=99.35%, sys=0.10%, ctx=4, majf=0, minf=607 00:11:06.024 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:06.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:06.024 issued rwts: total=43316,42988,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.024 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:06.024 00:11:06.024 Run status group 0 (all jobs): 00:11:06.024 READ: bw=84.6MiB/s (88.7MB/s), 84.6MiB/s-84.6MiB/s (88.7MB/s-88.7MB/s), io=169MiB (177MB), run=2001-2001msec 00:11:06.024 WRITE: bw=83.9MiB/s (88.0MB/s), 83.9MiB/s-83.9MiB/s (88.0MB/s-88.0MB/s), io=168MiB (176MB), run=2001-2001msec 00:11:06.024 ----------------------------------------------------- 00:11:06.024 Suppressions used: 00:11:06.024 count bytes template 00:11:06.024 1 32 /usr/src/fio/parse.c 00:11:06.024 1 8 libtcmalloc_minimal.so 00:11:06.024 ----------------------------------------------------- 00:11:06.024 
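Each per-controller fio pass in this test reduces to one invocation of stock fio with SPDK's external ioengine preloaded, which is what the LD_PRELOAD line above assembles; a sketch of the shape (the awk one-liner condenses the trace's separate ldd/grep/awk steps, and yields an empty asan_lib on non-ASAN builds):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3}')  # ASAN runtime must be preloaded first
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096

Note the dots in the traddr: fio reserves ':' in filenames, so the SPDK plugin expects the PCI address with ':' replaced by '.'.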
00:11:06.024 11:54:55 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:06.024 11:54:55 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:06.024 11:54:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:06.024 11:54:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:06.283 11:54:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:06.283 11:54:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:06.543 11:54:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:06.543 11:54:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:06.543 11:54:56 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:06.543 11:54:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:06.543 11:54:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:06.543 11:54:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:06.543 11:54:56 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:06.543 11:54:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:06.543 11:54:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:06.543 11:54:56 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:06.543 11:54:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:06.543 11:54:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:06.543 11:54:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:06.543 11:54:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:06.543 11:54:56 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:06.543 11:54:56 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:06.543 11:54:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:06.544 11:54:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:06.803 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:06.803 fio-3.35 00:11:06.803 Starting 1 thread 00:11:11.005 00:11:11.005 test: (groupid=0, jobs=1): err= 0: pid=65534: Wed Nov 27 11:55:00 2024 00:11:11.005 read: IOPS=22.4k, BW=87.6MiB/s (91.8MB/s)(175MiB/2001msec) 00:11:11.005 slat (nsec): min=3932, max=91731, avg=4510.29, stdev=1229.54 00:11:11.005 clat (usec): min=290, max=11622, avg=2846.64, stdev=515.30 00:11:11.005 lat (usec): min=294, max=11674, avg=2851.15, stdev=516.07 00:11:11.005 clat percentiles (usec): 00:11:11.005 | 1.00th=[ 2540], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2704], 00:11:11.005 | 
30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2769], 60.00th=[ 2802], 00:11:11.005 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2900], 95.00th=[ 2999], 00:11:11.005 | 99.00th=[ 5604], 99.50th=[ 7177], 99.90th=[ 8586], 99.95th=[ 9110], 00:11:11.005 | 99.99th=[11338] 00:11:11.005 bw ( KiB/s): min=84176, max=92328, per=98.48%, avg=88330.67, stdev=4078.28, samples=3 00:11:11.005 iops : min=21044, max=23082, avg=22082.67, stdev=1019.57, samples=3 00:11:11.005 write: IOPS=22.3k, BW=87.0MiB/s (91.3MB/s)(174MiB/2001msec); 0 zone resets 00:11:11.005 slat (nsec): min=4050, max=48752, avg=4701.03, stdev=1195.20 00:11:11.005 clat (usec): min=343, max=11495, avg=2853.69, stdev=515.06 00:11:11.005 lat (usec): min=348, max=11507, avg=2858.39, stdev=515.80 00:11:11.005 clat percentiles (usec): 00:11:11.005 | 1.00th=[ 2540], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2704], 00:11:11.005 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2802], 00:11:11.005 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2933], 95.00th=[ 2999], 00:11:11.005 | 99.00th=[ 5538], 99.50th=[ 7177], 99.90th=[ 8455], 99.95th=[ 9372], 00:11:11.005 | 99.99th=[10814] 00:11:11.005 bw ( KiB/s): min=84080, max=93048, per=99.26%, avg=88472.00, stdev=4486.83, samples=3 00:11:11.005 iops : min=21020, max=23262, avg=22118.00, stdev=1121.71, samples=3 00:11:11.005 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:11.005 lat (msec) : 2=0.05%, 4=98.04%, 10=1.84%, 20=0.03% 00:11:11.005 cpu : usr=99.40%, sys=0.05%, ctx=3, majf=0, minf=606 00:11:11.005 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:11.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.005 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.005 issued rwts: total=44870,44588,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.005 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.005 00:11:11.005 Run status group 0 (all jobs): 00:11:11.005 READ: bw=87.6MiB/s (91.8MB/s), 87.6MiB/s-87.6MiB/s (91.8MB/s-91.8MB/s), io=175MiB (184MB), run=2001-2001msec 00:11:11.005 WRITE: bw=87.0MiB/s (91.3MB/s), 87.0MiB/s-87.0MiB/s (91.3MB/s-91.3MB/s), io=174MiB (183MB), run=2001-2001msec 00:11:11.005 ----------------------------------------------------- 00:11:11.005 Suppressions used: 00:11:11.005 count bytes template 00:11:11.005 1 32 /usr/src/fio/parse.c 00:11:11.005 1 8 libtcmalloc_minimal.so 00:11:11.005 ----------------------------------------------------- 00:11:11.005 00:11:11.005 11:55:00 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:11.006 11:55:00 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:11.006 11:55:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:11.006 11:55:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:11.006 11:55:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:11.006 11:55:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:11.265 11:55:01 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:11.265 11:55:01 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:11.265 11:55:01 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:11.265 11:55:01 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:11.265 11:55:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:11.265 11:55:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:11.265 11:55:01 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:11.265 11:55:01 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:11.265 11:55:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:11.265 11:55:01 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:11.265 11:55:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:11.265 11:55:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:11.265 11:55:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:11.525 11:55:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:11.525 11:55:01 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:11.525 11:55:01 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:11.525 11:55:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:11.526 11:55:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:11.526 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:11.526 fio-3.35 00:11:11.526 Starting 1 thread 00:11:16.831 00:11:16.831 test: (groupid=0, jobs=1): err= 0: pid=65595: Wed Nov 27 11:55:06 2024 00:11:16.831 read: IOPS=21.9k, BW=85.4MiB/s (89.5MB/s)(171MiB/2001msec) 00:11:16.831 slat (nsec): min=3687, max=82926, avg=4672.11, stdev=1251.64 00:11:16.831 clat (usec): min=180, max=10973, avg=2920.44, stdev=405.78 00:11:16.831 lat (usec): min=184, max=11056, avg=2925.12, stdev=406.46 00:11:16.831 clat percentiles (usec): 00:11:16.831 | 1.00th=[ 2540], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2737], 00:11:16.831 | 30.00th=[ 2769], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 2933], 00:11:16.831 | 70.00th=[ 2966], 80.00th=[ 3032], 90.00th=[ 3097], 95.00th=[ 3163], 00:11:16.831 | 99.00th=[ 4752], 99.50th=[ 5342], 99.90th=[ 7832], 99.95th=[ 9503], 00:11:16.831 | 99.99th=[10814] 00:11:16.831 bw ( KiB/s): min=85632, max=90952, per=100.00%, avg=88437.33, stdev=2671.88, samples=3 00:11:16.831 iops : min=21408, max=22738, avg=22109.33, stdev=667.97, samples=3 00:11:16.831 write: IOPS=21.7k, BW=84.8MiB/s (88.9MB/s)(170MiB/2001msec); 0 zone resets 00:11:16.831 slat (nsec): min=3769, max=48924, avg=4861.79, stdev=1213.06 00:11:16.831 clat (usec): min=196, max=11128, avg=2930.27, stdev=421.51 00:11:16.831 lat (usec): min=202, max=11132, avg=2935.14, stdev=422.13 00:11:16.831 clat percentiles (usec): 00:11:16.831 | 1.00th=[ 2540], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2737], 00:11:16.831 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 2933], 00:11:16.831 | 70.00th=[ 2966], 80.00th=[ 
3032], 90.00th=[ 3097], 95.00th=[ 3195], 00:11:16.831 | 99.00th=[ 4817], 99.50th=[ 5342], 99.90th=[ 8586], 99.95th=[ 9765], 00:11:16.831 | 99.99th=[10683] 00:11:16.831 bw ( KiB/s): min=86888, max=90704, per=100.00%, avg=88578.67, stdev=1944.78, samples=3 00:11:16.831 iops : min=21722, max=22676, avg=22144.67, stdev=486.19, samples=3 00:11:16.831 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:16.831 lat (msec) : 2=0.21%, 4=98.21%, 10=1.50%, 20=0.04% 00:11:16.831 cpu : usr=99.40%, sys=0.10%, ctx=5, majf=0, minf=605 00:11:16.831 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:16.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.831 issued rwts: total=43730,43417,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.831 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.831 00:11:16.831 Run status group 0 (all jobs): 00:11:16.831 READ: bw=85.4MiB/s (89.5MB/s), 85.4MiB/s-85.4MiB/s (89.5MB/s-89.5MB/s), io=171MiB (179MB), run=2001-2001msec 00:11:16.831 WRITE: bw=84.8MiB/s (88.9MB/s), 84.8MiB/s-84.8MiB/s (88.9MB/s-88.9MB/s), io=170MiB (178MB), run=2001-2001msec 00:11:16.831 ----------------------------------------------------- 00:11:16.831 Suppressions used: 00:11:16.831 count bytes template 00:11:16.831 1 32 /usr/src/fio/parse.c 00:11:16.831 1 8 libtcmalloc_minimal.so 00:11:16.831 ----------------------------------------------------- 00:11:16.831 00:11:16.831 ************************************ 00:11:16.831 END TEST nvme_fio 00:11:16.831 ************************************ 00:11:16.831 11:55:06 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:16.831 11:55:06 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:16.831 00:11:16.831 real 0m20.100s 00:11:16.831 user 0m14.854s 00:11:16.832 sys 0m6.321s 00:11:16.832 11:55:06 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.832 11:55:06 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:16.832 ************************************ 00:11:16.832 END TEST nvme 00:11:16.832 ************************************ 00:11:16.832 00:11:16.832 real 1m35.232s 00:11:16.832 user 3m41.091s 00:11:16.832 sys 0m26.606s 00:11:16.832 11:55:06 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.832 11:55:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:16.832 11:55:06 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:11:16.832 11:55:06 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:16.832 11:55:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:16.832 11:55:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.832 11:55:06 -- common/autotest_common.sh@10 -- # set +x 00:11:16.832 ************************************ 00:11:16.832 START TEST nvme_scc 00:11:16.832 ************************************ 00:11:16.832 11:55:06 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:16.832 * Looking for test storage... 
00:11:16.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:16.832 11:55:06 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:16.832 11:55:06 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:16.832 11:55:06 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:16.832 11:55:06 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@345 -- # : 1 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@368 -- # return 0 00:11:16.832 11:55:06 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.832 11:55:06 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:16.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.832 --rc genhtml_branch_coverage=1 00:11:16.832 --rc genhtml_function_coverage=1 00:11:16.832 --rc genhtml_legend=1 00:11:16.832 --rc geninfo_all_blocks=1 00:11:16.832 --rc geninfo_unexecuted_blocks=1 00:11:16.832 00:11:16.832 ' 00:11:16.832 11:55:06 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:16.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.832 --rc genhtml_branch_coverage=1 00:11:16.832 --rc genhtml_function_coverage=1 00:11:16.832 --rc genhtml_legend=1 00:11:16.832 --rc geninfo_all_blocks=1 00:11:16.832 --rc geninfo_unexecuted_blocks=1 00:11:16.832 00:11:16.832 ' 00:11:16.832 11:55:06 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:11:16.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.832 --rc genhtml_branch_coverage=1 00:11:16.832 --rc genhtml_function_coverage=1 00:11:16.832 --rc genhtml_legend=1 00:11:16.832 --rc geninfo_all_blocks=1 00:11:16.832 --rc geninfo_unexecuted_blocks=1 00:11:16.832 00:11:16.832 ' 00:11:16.832 11:55:06 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:16.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.832 --rc genhtml_branch_coverage=1 00:11:16.832 --rc genhtml_function_coverage=1 00:11:16.832 --rc genhtml_legend=1 00:11:16.832 --rc geninfo_all_blocks=1 00:11:16.832 --rc geninfo_unexecuted_blocks=1 00:11:16.832 00:11:16.832 ' 00:11:16.832 11:55:06 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:16.832 11:55:06 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:16.832 11:55:06 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:16.832 11:55:06 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:16.832 11:55:06 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.832 11:55:06 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.832 11:55:06 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.832 11:55:06 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.832 11:55:06 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.832 11:55:06 nvme_scc -- paths/export.sh@5 -- # export PATH 00:11:16.832 11:55:06 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
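The lt 1.15 2 / cmp_versions trace a few lines up is scripts/common.sh comparing the installed lcov version against 2 to pick compatible coverage option spellings. The comparison splits both versions on '.', '-' and ':' and walks the fields numerically; a minimal re-implementation of that logic, under the assumption of purely numeric fields:

    # lt A B -> exit 0 (true) iff version A sorts strictly before B.
    lt() {
        local IFS=.-:          # split fields on '.', '-' and ':'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # Missing fields count as 0, so 1.15 vs 2 compares as 1.15 vs 2.0.
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1               # equal versions are not less-than
    }

    lt 1.15 2 && echo "lcov is older than 2.x"   # matches the 'return 0' traced above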
00:11:16.832 11:55:06 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:11:16.833 11:55:06 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:16.833 11:55:06 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:11:16.833 11:55:06 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:16.833 11:55:06 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:11:16.833 11:55:06 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:16.833 11:55:06 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:16.833 11:55:06 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:16.833 11:55:06 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:11:16.833 11:55:06 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:16.833 11:55:06 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:11:16.833 11:55:06 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:16.833 11:55:06 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:16.833 11:55:06 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:17.403 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:17.663 Waiting for block devices as requested 00:11:17.923 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:17.923 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:18.184 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:18.184 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:23.468 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:23.468 11:55:13 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:23.468 11:55:13 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:23.468 11:55:13 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:23.468 11:55:13 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:23.468 11:55:13 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
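Two steps are interleaved just above: setup.sh reset hands the emulated controllers back from uio_pci_generic to the kernel nvme driver, then scan_nvme_ctrls starts caching id-ctrl output. The rebind is roughly the standard sysfs dance; setup.sh's exact internals are not shown in this log, so treat this only as a sketch:

    # Rebind one PCI function from uio_pci_generic back to nvme (run as root).
    bdf=0000:00:11.0
    echo "$bdf" > /sys/bus/pci/drivers/uio_pci_generic/unbind
    echo nvme   > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers/nvme/bind
    echo ""     > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override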
00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:23.468 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
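Every nvme0[...]= line in this stretch is one pass of the same nvme_get loop: nvme-cli's id-ctrl text is split on ':' into a field name and a value, whitespace is squeezed out of the name (which is how "ps 0" later becomes the ps0 key), and the pair is eval'd into a global associative array so later tests can consult the cached registers. A condensed sketch of that loop:

    # Cache "field : value" lines from nvme-cli into an associative array.
    declare -gA nvme0=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue       # skip valueless lines, as the [[ -n ... ]] checks above do
        reg=${reg//[[:space:]]/}        # "ps    0" -> "ps0"
        # eval keeps values with embedded/trailing spaces intact, e.g. sn "12341 ".
        eval "nvme0[$reg]=\"${val# }\""
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

The real helper is more general: it takes the array name as an argument and uses shift plus a dynamically named local -gA, so the identical loop fills nvme0, ng0n1, nvme0n1, and so on.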
00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:23.469 11:55:13 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.469 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:23.469 11:55:13 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:11:23.470 
11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
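The ng0n1 dump above comes from the namespace half of the scan: after id-ctrl, functions.sh globs each controller's namespace nodes with an extglob alternation that catches both the generic character device (ng0n1) and the block device (nvme0n1), then runs the same nvme_get caching over id-ns. The pattern, isolated:

    shopt -s extglob    # enables the @(...) alternation (set in scripts/common.sh@15)
    for ctrl in /sys/class/nvme/nvme*; do
        # For nvme0 this expands to nvme0/@(ng0|nvme0n)* -> ng0n1, nvme0n1, ...
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            [[ -e $ns ]] || continue    # unmatched glob stays literal; skip it
            echo "namespace node: ${ns##*/}"
        done
    done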
00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:11:23.470 11:55:13 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.470 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:23.471 11:55:13 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:23.471 11:55:13 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:23.471 11:55:13 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:23.471 11:55:13 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:23.471 11:55:13 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:23.471 11:55:13 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:23.471 11:55:13 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.471 
11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.471 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:23.472 
11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.472 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.473 11:55:13 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:23.473 11:55:13 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
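The records around functions.sh@47-63 (visible above for both nvme0 and nvme1) are the discovery pass that drives all of this parsing: walk /sys/class/nvme/nvme*, screen each controller's PCI address through pci_can_use() from scripts/common.sh, identify it, then glob both its ngXnY character nodes and nvmeXnY block nodes and file them in a per-controller namespace map through a bash nameref. The following is a sketch of that walk under stated assumptions: it reuses the nvme_get sketch above, and it takes the BDF from the controller's sysfs address attribute, which the trace does not actually show (pci=0000:00:10.0 simply appears at functions.sh@49):

  shopt -s extglob nullglob          # the @(...) namespace glob needs extglob

  declare -gA ctrls nvmes bdfs       # global maps seen at functions.sh@60-62
  declare -ga ordered_ctrls          # index-ordered list seen at @63

  scan_nvme_ctrls() {
    local ctrl ctrl_dev ns ns_dev pci
    for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue                    # @48
      ctrl_dev=${ctrl##*/}                          # e.g. nvme1 (@51)
      pci=$(<"$ctrl/address")                       # BDF, e.g. 0000:00:10.0 (assumed source)
      pci_can_use "$pci" || continue                # allow/block lists (@50)
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev" # @52
      local -gA "${ctrl_dev}_ns=()"
      unset -n _ctrl_ns
      local -n _ctrl_ns=${ctrl_dev}_ns              # @53: per-controller ns map
      # @54: matches ng1n1 (generic char dev) and nvme1n1 (block dev) alike
      for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue                    # @55
        ns_dev=${ns##*/}                            # @56
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"     # @57
        _ctrl_ns[${ns##*n}]=$ns_dev                 # @58: keyed by namespace id
      done
      ctrls["$ctrl_dev"]=$ctrl_dev                  # @60
      nvmes["$ctrl_dev"]=${ctrl_dev}_ns             # @61: map name, resolved later
      bdfs["$ctrl_dev"]=$pci                        # @62
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev    # @63
    done
  }

The nameref is the design point worth noting: local -n _ctrl_ns=nvme1_ns lets one loop body fill differently named arrays (nvme0_ns, nvme1_ns, ...) that nvmes[] then references by name, so only the per-field id-ctrl/id-ns assignments need eval.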
00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:11:23.473 11:55:13 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.473 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 
11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
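The flbas=0x7 and lbaf0..lbaf7 strings captured above for ng1n1 (and repeated below for its block-device twin nvme1n1) encode the namespace's LBA formats: the low nibble of flbas selects the in-use format, and lbads is the log2 of the data block size, so lbaf7 = 'ms:64 lbads:12 rp:0 (in use)' means 4096-byte blocks with 64 bytes of metadata. A hypothetical helper (not part of functions.sh) that decodes this from an array nvme_get populated:

    # Hypothetical: report the in-use logical block size for a namespace
    # array filled by nvme_get (e.g. ng1n1 or nvme1n1). FLBAS bits 3:0
    # index lbaf0..lbaf15; LBADS is a power-of-two exponent.
    lba_size() {
        local -n ns=$1                       # bash 4.3+ nameref
        local idx=$(( ${ns[flbas]} & 0xf ))  # 0x7 -> format 7
        local fmt=${ns[lbaf$idx]}            # 'ms:64 lbads:12 rp:0 (in use)'
        local lbads=${fmt#*lbads:}           # '12 rp:0 (in use)'
        echo $(( 1 << ${lbads%% *} ))        # 2^12 = 4096
    }

With the values above, `lba_size ng1n1` prints 4096.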
00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:23.474 
11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.474 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:23.475 11:55:13 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:23.475 11:55:13 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:23.475 11:55:13 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:23.475 11:55:13 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:23.475 11:55:13 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
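The functions.sh@47-@63 lines above registered nvme1 in the suite's global maps (ctrls, nvmes, bdfs, ordered_ctrls) and re-entered the same dance for the next controller, nvme2 at PCI 0000:00:12.0. The outer scan reduces to roughly the following sketch (per-namespace id-ns pass omitted; pci_can_use is the allow/deny-list check from scripts/common.sh, and the readlink line is an assumption about how the BDF is obtained, not lifted from the trace):

    # Sketch of the controller scan the trace keeps re-entering.
    declare -gA ctrls nvmes bdfs
    declare -ga ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0
        pci_can_use "$pci" || continue                    # skip disallowed devices
        ctrl_dev=${ctrl##*/}                              # e.g. nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fills nvme2[...]
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # name of the per-ns map
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # index by controller number
    done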
00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
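Several of the id-ctrl words captured here (oaes=0x100, ctratt=0x8000) and just below (oacs, oncs, frmw) are bit masks the suite later tests bit-by-bit to decide which cases can run. In plain shell arithmetic over the freshly filled array, e.g. (helper name hypothetical; bit positions per the NVMe base spec):

    # Hypothetical probes over the nvme2 array. (( x & mask )) succeeds
    # (status 0) when the result is nonzero, i.e. the bit is set.
    has_bit() { local -n c=$1; (( ${c[$2]} & $3 )); }

    has_bit nvme2 oaes 0x100 && echo "ns attribute notices"   # bit 8: set here
    has_bit nvme2 oacs 0x08  && echo "ns management"          # 0x12a & 0x8: set
    has_bit nvme2 oncs 0x04  && echo "dataset management"     # 0x15d & 0x4: set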
00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:23.475 11:55:13 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
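The wctemp=343 and cctemp=373 just parsed are composite temperature thresholds in kelvins, as the NVMe spec defines them; QEMU's defaults correspond to the 70 C warning and 100 C critical thresholds. The conversion, in the log's own shell idiom (illustrative only):

    # Kelvin -> Celsius for the thresholds parsed above (spec uses 0 C = 273 K).
    echo "wctemp: $(( ${nvme2[wctemp]} - 273 )) C"   # 343 K -> 70 C
    echo "cctemp: $(( ${nvme2[cctemp]} - 273 )) C"   # 373 K -> 100 C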
00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.475 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:23.476 11:55:13 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.476 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:23.741 
11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.741 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:23.742 
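With the controller identify data stored, @53 binds _ctrl_ns to the per-controller namespace map (nvme2_ns) and @54 iterates an extglob that matches both the generic character nodes (ng2n1, ng2n2, ...) and the block namespaces (nvme2n1, ...) under the controller's sysfs directory; each match is identified with id-ns below and recorded in _ctrl_ns, indexed by namespace number. A sketch of what that glob expands to for this controller, assuming the sysfs layout shown in the trace (extglob must be enabled):

shopt -s extglob                           # @(...) patterns need extglob
ctrl=/sys/class/nvme/nvme2
# "ng${ctrl##*nvme}" -> "ng2"    (generic char-dev namespaces)
# "${ctrl##*/}n"     -> "nvme2n" (block namespaces)
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "${ns##*/} -> nsid ${ns##*n}"     # e.g. "ng2n1 -> nsid 1", as at @58
done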
11:55:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
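The first id-ns fields captured for ng2n1 give the geometry: nsze, ncap and nuse are all 0x100000 blocks, and flbas=0x4 selects LBA format 4, which the lbaf4 entry further down reports as lbads:12, i.e. 2^12 = 4096-byte blocks with no metadata (ms:0), so the namespace is 4 GiB. A sketch of deriving the byte size from those fields, using the values captured in this trace:

declare -A ng2n1=([nsze]=0x100000 [flbas]=0x4
                  [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
fmt=$(( ng2n1[flbas] & 0xf ))              # low nibble picks the LBA format
lbads=${ng2n1[lbaf$fmt]#*lbads:}           # -> "12 rp:0 (in use)"
lbads=${lbads%% *}                         # -> "12"
echo $(( ng2n1[nsze] * (1 << lbads) ))     # 1048576 * 4096 = 4294967296 bytes (4 GiB)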
00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.742 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.743 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:11:23.744 11:55:13 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 
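One controller field captured earlier is the one this nvme_scc suite is ultimately after: nvme2[oncs]=0x15d has ONCS bit 8 set, which per the NVMe specification advertises support for the (simple) Copy command. A hypothetical check of that bit against the captured value:

oncs=0x15d                                 # from nvme2[oncs] in the trace above
if (( (oncs >> 8) & 1 )); then             # ONCS bit 8: Copy command support
    echo "Simple Copy supported"
fi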
11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.744 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.745 11:55:13 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.745 11:55:13 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:11:23.745 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.746 11:55:13 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.746 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- 
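Note on the trace above: each nvme_get invocation pipes the output of nvme-cli's id-ns (or id-ctrl) through a "while IFS=: read -r reg val" loop and evals every field/value pair into a global associative array named after the device, which is why the log repeats the same IFS=: / read / eval triplet for every register. A minimal sketch of that pattern, assuming nvme-cli's usual "field : value" output layout (the helper name and the trimming details here are illustrative, not SPDK's exact nvme/functions.sh code):

    #!/usr/bin/env bash
    # Sketch: parse "field : value" lines from nvme-cli into an assoc array.
    # Assumes one pair per line, e.g. "nsze    : 0x100000".
    declare -A ns_info=()

    parse_id_ns() {  # usage: parse_id_ns /dev/nvme2n1
        local dev=$1 reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}             # drop padding around the key
            val=${val#"${val%%[![:space:]]*}"}   # trim leading spaces of the value
            [[ -n $reg && -n $val ]] && ns_info[$reg]=$val
        done < <(nvme id-ns "$dev")
    }

    parse_id_ns /dev/nvme2n1
    echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"

Multi-colon values such as "ms:0 lbads:12 rp:0" survive intact because read assigns everything after the first colon to val.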
nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.747 11:55:13 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:23.747 11:55:13 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.747 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
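The lbaf0..lbaf7 rows captured for each namespace are its supported LBA formats, and the low nibble of flbas (0x4 here) selects the one in use: lbaf4 = "ms:0 lbads:12 rp:0 (in use)", i.e. no per-block metadata and 2^12 = 4096-byte logical blocks. A short sketch of that decode, using the values recorded above (the parsing is illustrative):

    #!/usr/bin/env bash
    declare -A ns=( [flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)' )

    fmt_idx=$(( ${ns[flbas]} & 0xf ))            # low nibble picks the format
    lbaf=${ns[lbaf$fmt_idx]}
    lbads=${lbaf##*lbads:}; lbads=${lbads%% *}   # extract the lbads field
    echo "in-use format lbaf$fmt_idx: block size $(( 1 << lbads )) bytes"
    # -> in-use format lbaf4: block size 4096 bytes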
]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.748 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:23.749 11:55:13 nvme_scc -- 
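nsze, ncap and nuse are counted in logical blocks, so with the 4096-byte format in use each of these QEMU namespaces reports 0x100000 blocks = 2^20 x 2^12 bytes = 4 GiB, fully allocated (ncap) and fully utilized (nuse). A quick worked check:

    #!/usr/bin/env bash
    blocks=0x100000 block_size=4096
    echo "$(( blocks * block_size )) bytes"        # 4294967296
    echo "$(( blocks * block_size >> 30 )) GiB"    # 4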
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.749 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:23.750 
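The per-namespace loop visible in the trace, for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, is an extglob that matches both the generic character nodes (ng2n1, ng2n2, ...) and the block nodes (nvme2n1, nvme2n2, ...) under /sys/class/nvme/nvme2. Both variants of namespace N are stored under _ctrl_ns[${ns##*n}], keyed by the namespace number, so the nvme2nN entry simply overwrites the earlier ng2nN one. A small sketch of those expansions with the device names hardcoded for illustration (the real loop globs sysfs):

    #!/usr/bin/env bash
    declare -A _ctrl_ns=()
    # Lexicographic glob order puts ng2n* before nvme2n*, as in the trace.
    for ns in ng2n1 ng2n2 ng2n3 nvme2n1 nvme2n2 nvme2n3; do
        _ctrl_ns[${ns##*n}]=$ns   # ${ns##*n} strips through the last 'n' -> "1".."3"
    done
    declare -p _ctrl_ns           # each key ends up holding the nvme2nN name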
11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:23.750 11:55:13 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:23.750 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:23.751 11:55:13 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:23.751 11:55:13 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:23.751 11:55:13 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:23.751 11:55:13 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:23.751 11:55:13 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:23.751 11:55:13 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:23.751 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.752 11:55:13 
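At this point the scan has finished controller nvme2: it is recorded in the global tables (ctrls, nvmes for its namespace map, bdfs for its PCI address 0000:00:12.0, ordered_ctrls by index) before the loop moves on to nvme3 at 0000:00:13.0. The pci_can_use gate nvme3 passes through is consistent with a block/allow-list check against environment variables; a hedged sketch of that idea follows (the PCI_BLOCKED/PCI_ALLOWED names and space-separated-list format are assumptions, not confirmed by this log):

    #!/usr/bin/env bash
    # Sketch of an allow/block-list gate like the pci_can_use check in the trace.
    pci_can_use() {
        local bdf=$1
        [[ " $PCI_BLOCKED " == *" $bdf "* ]] && return 1  # explicitly blocked
        [[ -z $PCI_ALLOWED ]] && return 0                 # no allow-list: anything goes
        [[ " $PCI_ALLOWED " == *" $bdf "* ]]              # otherwise must be listed
    }

    pci_can_use 0000:00:13.0 && echo "usable"

With both lists empty, as the [[ -z '' ]] step in the trace suggests, every device is usable and the function returns 0.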
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:23.752 11:55:13 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:23.752 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:24.013 11:55:13 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.013 
11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:24.013 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:24.014 11:55:13 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 
11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:24.014 
11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.014 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.015 11:55:13 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:24.015 11:55:13 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:24.015 11:55:13 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:24.015 11:55:13 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
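The xtrace here is get_ctrls_with_feature walking every detected controller through ctrl_has_scc, which tests bit 8 (the Copy command) of the ONCS value captured from id-ctrl; the nvme2 probe and the final selection of nvme1 follow below. A minimal bash sketch of the check, reconstructed from the trace itself (the authoritative bodies live in test/common/nvme/functions.sh; nvme0..nvme3 are the associative arrays filled by the id-ctrl parsing loop earlier in this log):

    # Sketch reconstructed from the trace: SCC support == ONCS bit 8.
    get_oncs() {
        local ctrl=$1
        local -n _ctrl=$ctrl         # nameref onto e.g. the nvme3 array
        echo "${_ctrl[oncs]}"        # every controller in this run reports 0x15d
    }

    ctrl_has_scc() {
        local ctrl=$1 oncs
        oncs=$(get_oncs "$ctrl")
        (( oncs & 1 << 8 ))          # bit 8 set -> Simple Copy supported
    }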
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:11:24.016 11:55:13 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:11:24.016 11:55:13 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:11:24.016 11:55:13 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:11:24.016 11:55:13 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:11:24.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:25.523 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:11:25.523 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:11:25.523 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:11:25.523 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:11:25.783 11:55:15 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:11:25.783 11:55:15 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:25.783 11:55:15 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:25.783 11:55:15 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:11:25.783 ************************************
00:11:25.783 START TEST nvme_simple_copy
00:11:25.783 ************************************
00:11:25.783 11:55:15 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:11:26.042 Initializing NVMe Controllers
00:11:26.042 Attaching to 0000:00:10.0
00:11:26.042 Controller supports SCC. Attached to 0000:00:10.0
00:11:26.042 Namespace ID: 1 size: 6GB
00:11:26.042 Initialization complete.
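A note between the initialization output above and the copy results, which continue below: the simple_copy helper writes LBAs 0-63 with random data, issues a single Simple Copy to destination LBA 256, then reads the destination back and counts matching LBAs. The same ONCS capability probe can be run by hand with nvme-cli (the binary this log invokes from /usr/local/src/nvme-cli/nvme); a hedged sketch, assuming a /dev/nvme0 device node and id-ctrl's usual "field : value" text layout:

    # Assumption: 'nvme id-ctrl' prints a line of the form "oncs : 0x15d".
    oncs=$(nvme id-ctrl /dev/nvme0 | awk -F: '/^oncs/ {gsub(/[[:space:]]/, "", $2); print $2}')
    if (( oncs & 1 << 8 )); then
        echo "Copy command (SCC) supported: oncs=$oncs"
    fi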
00:11:26.042
00:11:26.042 Controller QEMU NVMe Ctrl (12340 )
00:11:26.042 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:11:26.042 Namespace Block Size:4096
00:11:26.042 Writing LBAs 0 to 63 with Random Data
00:11:26.042 Copied LBAs from 0 - 63 to the Destination LBA 256
00:11:26.042 LBAs matching Written Data: 64
00:11:26.042
00:11:26.042 real 0m0.344s
00:11:26.042 user 0m0.119s
00:11:26.042 sys 0m0.122s
00:11:26.042 ************************************
00:11:26.042 END TEST nvme_simple_copy
00:11:26.042 ************************************
00:11:26.042 11:55:15 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:26.042 11:55:15 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:11:26.042 ************************************
00:11:26.042 END TEST nvme_scc
00:11:26.042 ************************************
00:11:26.042
00:11:26.042 real 0m9.461s
00:11:26.042 user 0m1.635s
00:11:26.042 sys 0m2.583s
00:11:26.042 11:55:16 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:26.042 11:55:16 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:11:26.042 11:55:16 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:11:26.042 11:55:16 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:11:26.042 11:55:16 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:11:26.042 11:55:16 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:11:26.042 11:55:16 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:11:26.042 11:55:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:26.043 11:55:16 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:26.043 11:55:16 -- common/autotest_common.sh@10 -- # set +x
00:11:26.328 ************************************
00:11:26.328 START TEST nvme_fdp
00:11:26.328 ************************************
00:11:26.328 11:55:16 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:11:26.328 * Looking for test storage...
00:11:26.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:11:26.328 11:55:16 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:11:26.328 11:55:16 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version
00:11:26.328 11:55:16 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:11:26.328 11:55:16 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@365 -- # decimal 1
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@353 -- # local d=1
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@355 -- # echo 1
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@366 -- # decimal 2
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@353 -- # local d=2
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@355 -- # echo 2
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:26.328 11:55:16 nvme_fdp -- scripts/common.sh@368 -- # return 0
00:11:26.328 11:55:16 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:26.328 11:55:16 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:11:26.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:26.328 --rc genhtml_branch_coverage=1
00:11:26.328 --rc genhtml_function_coverage=1
00:11:26.328 --rc genhtml_legend=1
00:11:26.328 --rc geninfo_all_blocks=1
00:11:26.328 --rc geninfo_unexecuted_blocks=1
00:11:26.328
00:11:26.328 '
00:11:26.328 11:55:16 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:11:26.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:26.328 --rc genhtml_branch_coverage=1
00:11:26.328 --rc genhtml_function_coverage=1
00:11:26.328 --rc genhtml_legend=1
00:11:26.328 --rc geninfo_all_blocks=1
00:11:26.329 --rc geninfo_unexecuted_blocks=1
00:11:26.329
00:11:26.329 '
00:11:26.329 11:55:16 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:11:26.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:26.329 --rc genhtml_branch_coverage=1
00:11:26.329 --rc genhtml_function_coverage=1
00:11:26.329 --rc genhtml_legend=1
00:11:26.329 --rc geninfo_all_blocks=1
00:11:26.329 --rc geninfo_unexecuted_blocks=1
00:11:26.329
00:11:26.329 '
00:11:26.329 11:55:16 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:11:26.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:26.329 --rc genhtml_branch_coverage=1
00:11:26.329 --rc genhtml_function_coverage=1
00:11:26.329 --rc genhtml_legend=1
00:11:26.329 --rc geninfo_all_blocks=1
00:11:26.329 --rc geninfo_unexecuted_blocks=1
00:11:26.329
00:11:26.329 '
00:11:26.329 11:55:16 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:11:26.329 11:55:16 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:11:26.329 11:55:16 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:11:26.329 11:55:16 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:11:26.329 11:55:16 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:26.329 11:55:16 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:11:26.329 11:55:16 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:26.329 11:55:16 nvme_fdp -- scripts/common.sh@552 -- # [[ -e
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.329 11:55:16 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.329 11:55:16 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.329 11:55:16 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.329 11:55:16 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.329 11:55:16 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:11:26.329 11:55:16 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.329 11:55:16 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:11:26.329 11:55:16 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:26.329 11:55:16 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:11:26.329 11:55:16 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:26.329 11:55:16 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:11:26.329 11:55:16 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:26.329 11:55:16 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:26.329 11:55:16 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:26.329 11:55:16 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:11:26.329 11:55:16 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:26.329 11:55:16 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:26.935 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:27.195 Waiting for block devices as requested 00:11:27.454 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:27.454 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:27.714 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:27.714 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:33.002 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:33.002 11:55:22 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:33.002 11:55:22 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:33.002 11:55:22 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:33.002 11:55:22 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:33.002 11:55:22 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:33.002 11:55:22 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.002 11:55:22 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.002 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.003 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.003 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:33.003 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:33.003 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.003 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.003 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.003 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:33.003 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:33.003 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.003 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.003 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:33.003 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:33.003 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:33.003 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.003 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.003 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:33.003 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:33.003 11:55:22 nvme_fdp -- 
00:11:33.003 11:55:22 nvme_fdp -- nvme/functions.sh -- # nvme_get nvme0 id-ctrl /dev/nvme0 (continued)
00:11:33.003   ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:11:33.003   crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:11:33.003   oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
00:11:33.003   wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0
00:11:33.004   fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0
00:11:33.004   endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:11:33.004   sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7
00:11:33.004   awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:11:33.005   subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:11:33.005   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:11:33.005   rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:11:33.005 11:55:22 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:11:33.005 11:55:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:33.005 11:55:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:11:33.005 11:55:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1
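Every field above lands in a bash associative array through the same IFS=':' read plus eval round-trip that dominates this trace. A minimal sketch of that harvesting pattern, assuming nvme-cli's "field : value" id-ctrl layout; sketch_nvme_get is a hypothetical name and the whitespace trimming is illustrative, not the exact functions.sh code:

#!/usr/bin/env bash
# Split each "reg : val" line from nvme-cli on the first ':' and store it in
# an associative array named after the device (e.g. nvme0[oacs]=0x12a).
sketch_nvme_get() {
    local ref=$1 dev=$2 reg val
    declare -gA "$ref=()"                      # e.g. nvme0=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}               # drop padding around the key
        val=${val#"${val%%[![:space:]]*}"}     # left-trim the value
        [[ -n $reg && -n $val ]] || continue   # skip blank or partial lines
        eval "${ref}[\$reg]=\$val"             # deferred expansion keeps spaces intact
    done < <(nvme id-ctrl "$dev")
}
# Usage: sketch_nvme_get nvme0 /dev/nvme0; echo "${nvme0[subnqn]}"

Note that read -r reg val only splits on the first colon, so values that themselves contain ':' (such as the subnqn above) survive intact in val.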
00:11:33.005 11:55:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
00:11:33.005 11:55:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:11:33.005   ng0n1: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:11:33.005   mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:11:33.006   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:11:33.006   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:11:33.006   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:11:33.006   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:33.006   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:11:33.006   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
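The lbaf table above is what downstream sizing decisions key off: the low nibble of flbas selects the active LBA format, and that format's lbads field is log2 of the block size. A small decode under those assumptions, using the values captured in this log (flbas=0x4 selects lbaf4, i.e. 4096-byte blocks with no metadata):

# Hedged sketch: derive the active block size from the id-ns fields above.
flbas=0x4
lbaf4='ms:0 lbads:12 rp:0 (in use)'              # value captured in this log
idx=$(( flbas & 0xf ))                            # -> 4, matching the "(in use)" marker
lbads=$(sed -E 's/.*lbads:([0-9]+).*/\1/' <<<"$lbaf4")
echo "lbaf$idx active: $(( 1 << lbads ))-byte blocks, ms:0 metadata"   # 4096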
00:11:33.007 11:55:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:11:33.007 11:55:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:11:33.007 11:55:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:11:33.007 11:55:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:11:33.007   nvme0n1: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:11:33.007   mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:11:33.008   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:11:33.008   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:11:33.008   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:11:33.008   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:33.008   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:11:33.008   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:11:33.008 11:55:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:11:33.008 11:55:22 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:11:33.008 11:55:22 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:11:33.008 11:55:22 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:11:33.008 11:55:22 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:11:33.008 11:55:22 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:11:33.008 11:55:22 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:11:33.008 11:55:22 nvme_fdp -- scripts/common.sh -- # pci_can_use 0000:00:10.0 (no allow/block lists set) -> return 0
00:11:33.008 11:55:22 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
'nvme1[sn]="12340 "' 00:11:33.008 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:33.008 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.008 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.008 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:33.008 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:33.009 11:55:22 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
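The trace above is test/nvme/functions.sh building a global associative array per controller: nvme_get pipes /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 through `IFS=: read -r reg val`, skips lines whose value field is empty, and evals each surviving pair into the array, which is why every field appears as the same [[ -n ... ]] / eval / assignment triple. A minimal sketch of that loop, with the whitespace trimming simplified relative to the real script:

    nvme_get_sketch() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                       # e.g. declare -gA nvme1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue             # keep only "reg : val" lines
            reg=${reg//[[:space:]]/}              # "vid   " -> "vid"
            eval "${ref}[$reg]=\${val# }"         # nvme1[vid]=0x1b36, nvme1[mdts]=7, ...
        done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")
    }
    # Usage: nvme_get_sketch nvme1 /dev/nvme1; echo "${nvme1[subnqn]}"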
00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:33.009 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
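The wctemp=343 and cctemp=373 captured just above are kelvin values, since NVMe reports composite-temperature thresholds that way, so this QEMU controller advertises a 70 C warning and a 100 C critical threshold. A hypothetical one-liner (not part of functions.sh) to decode them:

    k_to_c() { echo $(( $1 - 273 )); }            # hypothetical helper, kelvin -> Celsius
    k_to_c "${nvme1[wctemp]}"                     # 343 -> 70
    k_to_c "${nvme1[cctemp]}"                     # 373 -> 100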
00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.010 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:33.011 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:11:33.012 11:55:22 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
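The ng1n1 fields above are enough to size the namespace: FLBAS bits 3:0 select the active LBA format, and that format's lbads is log2 of the data block size, so bytes = nsze << lbads. Here flbas=0x7 points at lbaf7, which shows up later in this dump as "ms:64 lbads:12 rp:0 (in use)". A sketch of the arithmetic, with lbads hard-coded from that entry:

    flba_idx=$(( ${ng1n1[flbas]} & 0xf ))         # 0x7 -> format index 7
    lbads=12                                      # from "lbads:12" in lbaf7 (in use)
    echo $(( ${ng1n1[nsze]} << lbads ))           # 0x17a17a * 4096 = 6343335936 bytes (~5.9 GiB)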
00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:11:33.012 11:55:22 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.012 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:11:33.012 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:11:33.012 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.012 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:11:33.012 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:11:33.012 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.012 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:11:33.012 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:11:33.012 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.012 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
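The namespace loop seen at functions.sh@54 relies on bash extglob. With ctrl=/sys/class/nvme/nvme1, ${ctrl##*nvme} expands to 1 and ${ctrl##*/}n to nvme1n, so the pattern @(ng1|nvme1n)* matches both the generic character node ng1n1 (just parsed) and the block node nvme1n1 (parsed next), which is why the same id-ns output is captured twice per namespace. A quick check of those expansions:

    shopt -s extglob                              # required for the @(...) pattern
    ctrl=/sys/class/nvme/nvme1
    echo "${ctrl##*nvme}"                         # -> 1
    echo "${ctrl##*/}n"                           # -> nvme1n
    ls "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*   # -> ng1n1 nvme1n1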
00:11:33.012 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:11:33.012 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:11:33.012 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.012 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:33.013 11:55:23 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:33.013 11:55:23 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:33.013 11:55:23 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.013 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.014 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:33.014 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:33.014 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.014 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.014 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.014 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:33.014 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:33.014 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.014 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.014 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.279 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:33.279 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:33.279 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.279 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.279 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.279 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:33.279 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:33.279 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.279 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.279 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.279 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:33.279 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:33.279 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.279 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.279 11:55:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.279 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:33.279 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:33.279 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:33.280 11:55:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
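
The stretch of trace above and below is nvme/functions.sh's nvme_get helper caching `nvme id-ns` output for nvme1n1: each `reg : val` line printed by nvme-cli is split on the first colon (IFS=:) and stored into a global associative array named after the device node via eval. A minimal sketch of that pattern, simplified from the xtrace (NVME_BIN is a stand-in for the hardcoded /usr/local/src/nvme-cli/nvme path the trace shows at functions.sh@16):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # global assoc array named after the device node
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # skip lines with no value, as the trace does
            eval "${ref}[${reg// /}]=\"${val# }\""
        done < <("$NVME_BIN" "$@")
    }
    # matching the trace: nvme_get nvme1n1 id-ns /dev/nvme1n1
    # afterwards:         echo "${nvme1n1[nsze]}"   -> 0x17a17a
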
00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:33.280 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:33.281 11:55:23 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:33.281 11:55:23 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:33.281 11:55:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:33.281 11:55:23 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.281 11:55:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.281 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
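
Before any of the nvme2 fields above were read, the outer loop (functions.sh@47-52, visible near the start of the nvme2 block) found the controller under /sys/class/nvme, resolved its PCI address to 0000:00:12.0, and let pci_can_use accept it before invoking nvme_get again. A condensed sketch of that discovery pass, reusing nvme_get from the sketch earlier; deriving the address by walking the sysfs device symlink is an assumption, since the trace only shows the resulting pci= value:

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0 (sysfs walk assumed)
        pci_can_use "$pci" || continue                    # consults the PCI allow/block lists
        ctrl_dev=${ctrl##*/}                              # -> nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # cache identify-controller fields
    done
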
00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:33.282 11:55:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.282 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
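
Two of the fields cached just above are temperature thresholds, which identify-controller reports in kelvins: wctemp=343 and cctemp=373 correspond to 70 °C and 100 °C. Converting from the cached values:

    echo $(( ${nvme2[wctemp]} - 273 ))   # 343 K -> 70 C, warning composite temperature
    echo $(( ${nvme2[cctemp]} - 273 ))   # 373 K -> 100 C, critical composite temperature
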
00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:33.283 11:55:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:33.283 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
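
Once this id-ctrl pass completes, nvme2 is registered the same way nvme1 was a few hundred records up: ctrls[nvme2]=nvme2, nvmes[nvme2]=nvme2_ns, bdfs[nvme2] gets the PCI address, and a nameref (_ctrl_ns) lets the namespace loop below fill nvme2_ns by index. Illustrative lookups a test could run after discovery finishes (values expected from this trace; which of the ngXnY/nvmeXnY nodes is kept at each index depends on which was scanned last):

    echo "${bdfs[nvme2]}"              # expected -> 0000:00:12.0
    echo "${nvme2[subnqn]}"            # -> nqn.2019-08.org.qemu:12342
    declare -n ns_map=${nvmes[nvme2]}  # follow the name to the nvme2_ns map
    echo "${ns_map[1]}"                # -> namespace node recorded at index 1
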
00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:33.284 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 
11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 11:55:23 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.285 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.286 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:11:33.287 11:55:23 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 
11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:33.287 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:11:33.288 
11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
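
The dump above repeats one pattern per namespace: nvme_get runs nvme-cli's id-ns against the device node and folds each "reg : val" line of its output into a bash associative array named after that node. A condensed sketch of that helper, reconstructed from the functions.sh@16-@23 trace lines in this log (not the verbatim SPDK source; the whitespace normalization is an assumption):

nvme_get() {                                      # invoked as: nvme_get ng2n2 id-ns /dev/ng2n2
	local ref=$1 reg val
	shift
	local -gA "$ref=()"                       # global assoc array, as at functions.sh@20
	while IFS=: read -r reg val; do           # split each output line on the first ':'
		reg=${reg//[[:space:]]/}          # "lbaf  0 " -> "lbaf0" (assumed trim)
		val=${val# }                      # drop the leading space (approximation)
		[[ -n $val ]] || continue         # header lines carry no value (functions.sh@22)
		eval "$ref[$reg]=\"$val\""        # e.g. ng2n3[nsze]="0x100000" (functions.sh@23)
	done < <(/usr/local/src/nvme-cli/nvme "$@")   # binary path as invoked at functions.sh@16
}

Each eval '...' / assignment pair in the trace is one turn of this loop, and the IFS=: / read -r reg val pairs between them are the loop re-arming for the next line.
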
00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:11:33.288 11:55:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.288 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:33.289 11:55:23 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:33.289 11:55:23 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:33.289 
11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.289 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:33.290 11:55:23 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:33.290 
11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.290 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
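
Between namespace dumps, the trace also shows the enumeration loop that drives nvme_get (functions.sh@54-@58). A minimal reconstruction from the logged commands; the shopt line and the exact variable handling are assumptions, not verbatim SPDK source:

shopt -s extglob nullglob                         # the @(...) pattern needs extglob (assumed)
declare -A _ctrl_ns
ctrl=/sys/class/nvme/nvme2                        # the controller being probed in this trace
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
	[[ -e $ns ]] || continue                  # functions.sh@55
	ns_dev=${ns##*/}                           # ng2n1..ng2n3, then nvme2n1.. (functions.sh@56)
	nvme_get "$ns_dev" id-ns "/dev/$ns_dev"    # functions.sh@57: fills ng2n2[...], nvme2n1[...]
	_ctrl_ns[${ns##*n}]=$ns_dev                # keyed by namespace number (functions.sh@58)
done

Because ng2nN and nvme2nN share the same namespace number, the later nvme2nN iterations overwrite the ng2nN entries in _ctrl_ns, which is why the trace records _ctrl_ns[${ns##*n}]=ng2n1 and later _ctrl_ns[${ns##*n}]=nvme2n1 for the same index.
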
00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:33.291 11:55:23 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.291 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:33.292 11:55:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:33.292 11:55:23 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.292 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:33.293 11:55:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:33.293 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:33.294 11:55:23 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:33.294 11:55:23 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:33.294 11:55:23 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:33.294 11:55:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:33.294 11:55:23 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.294 11:55:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:33.294 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
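The functions.sh@16-@23 records above show the whole mechanism at work: nvme_get runs nvme-cli's id-ctrl (or id-ns) report for a device, splits every output line at the first colon with IFS=: read -r reg val, skips rows with an empty value, and evals each pair into a global associative array named after the device (here nvme3, starting with vid=0x1b36). A condensed sketch of that loop, simplified from what the trace executes; the whitespace trimming of field names and the nvme_cmd variable are my assumptions, not quotes of nvme/functions.sh:

nvme_cmd=/usr/local/src/nvme-cli/nvme    # binary seen at functions.sh@16

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                  # e.g. declares the global array nvme3=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # functions.sh@22: skip blank/banner rows
        reg=${reg//[[:space:]]/}         # "vid      " -> "vid" (assumed trimming)
        eval "${ref}[$reg]=\"${val# }\"" # functions.sh@23: nvme3[vid]="0x1b36"
    done < <("$nvme_cmd" "$@")
}

nvme_get nvme3 id-ctrl /dev/nvme3        # the call seen at functions.sh@52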
00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.555 11:55:23 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 
11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.555 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:33.556 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:33.557 11:55:23 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:11:33.557 11:55:23 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:33.558 11:55:23 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:11:33.558 11:55:23 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:33.558 11:55:23 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:11:33.558 11:55:23 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:11:33.558 11:55:23 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:11:33.558 11:55:23 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:11:33.558 11:55:23 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:33.558 11:55:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:33.558 11:55:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:33.558 11:55:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:33.558 11:55:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:33.558 11:55:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:33.558 11:55:23 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:33.558 11:55:23 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:33.558 11:55:23 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:11:33.558 11:55:23 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:11:33.558 11:55:23 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:11:33.558 11:55:23 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:33.558 11:55:23 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:33.558 11:55:23 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:34.127 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:35.066 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:35.066 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:35.066 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:35.066 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:35.066 11:55:25 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:35.066 11:55:25 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:35.066 11:55:25 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.066 11:55:25 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:35.066 ************************************ 00:11:35.066 START TEST nvme_flexible_data_placement 00:11:35.066 ************************************ 00:11:35.066 11:55:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:35.326 Initializing NVMe Controllers 00:11:35.326 Attaching to 0000:00:13.0 00:11:35.326 Controller supports FDP Attached to 0000:00:13.0 00:11:35.326 Namespace ID: 1 Endurance Group ID: 1 00:11:35.326 Initialization complete. 
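The long functions.sh trace above caches each controller's identify-controller fields (mxtmt, sqes, oncs, ctratt, ..., apparently harvested from nvme id-ctrl style "reg val" output) into a per-controller bash associative array; get_ctrl_with_feature then picks the first controller whose CTRATT advertises Flexible Data Placement (bit 19). A minimal standalone sketch of that selection, using the ctratt values observed in this run (only nvme3's 0x88010 has bit 19 set):

    #!/usr/bin/env bash
    # ctratt values as reported in the trace above
    declare -A ctrls=([nvme0]=0x8000 [nvme1]=0x8000 [nvme2]=0x8000 [nvme3]=0x88010)

    ctrl_has_fdp() {
        local ctratt=${ctrls[$1]}
        (( ctratt & 1 << 19 ))   # CTRATT bit 19 = Flexible Data Placement supported
    }

    for ctrl in "${!ctrls[@]}"; do
        ctrl_has_fdp "$ctrl" && echo "$ctrl"   # prints: nvme3
    done

In the trace, get_ctrls_with_feature accordingly echoes only nvme3, which nvme_fdp.sh adopts as $ctrl (bdf 0000:00:13.0) for the FDP test below.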
00:11:35.326 00:11:35.326 ================================== 00:11:35.326 == FDP tests for Namespace: #01 == 00:11:35.326 ================================== 00:11:35.326 00:11:35.326 Get Feature: FDP: 00:11:35.326 ================= 00:11:35.326 Enabled: Yes 00:11:35.326 FDP configuration Index: 0 00:11:35.326 00:11:35.326 FDP configurations log page 00:11:35.326 =========================== 00:11:35.326 Number of FDP configurations: 1 00:11:35.326 Version: 0 00:11:35.326 Size: 112 00:11:35.326 FDP Configuration Descriptor: 0 00:11:35.326 Descriptor Size: 96 00:11:35.327 Reclaim Group Identifier format: 2 00:11:35.327 FDP Volatile Write Cache: Not Present 00:11:35.327 FDP Configuration: Valid 00:11:35.327 Vendor Specific Size: 0 00:11:35.327 Number of Reclaim Groups: 2 00:11:35.327 Number of Reclaim Unit Handles: 8 00:11:35.327 Max Placement Identifiers: 128 00:11:35.327 Number of Namespaces Supported: 256 00:11:35.327 Reclaim unit Nominal Size: 6000000 bytes 00:11:35.327 Estimated Reclaim Unit Time Limit: Not Reported 00:11:35.327 RUH Desc #000: RUH Type: Initially Isolated 00:11:35.327 RUH Desc #001: RUH Type: Initially Isolated 00:11:35.327 RUH Desc #002: RUH Type: Initially Isolated 00:11:35.327 RUH Desc #003: RUH Type: Initially Isolated 00:11:35.327 RUH Desc #004: RUH Type: Initially Isolated 00:11:35.327 RUH Desc #005: RUH Type: Initially Isolated 00:11:35.327 RUH Desc #006: RUH Type: Initially Isolated 00:11:35.327 RUH Desc #007: RUH Type: Initially Isolated 00:11:35.327 00:11:35.327 FDP reclaim unit handle usage log page 00:11:35.327 ====================================== 00:11:35.327 Number of Reclaim Unit Handles: 8 00:11:35.327 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:35.327 RUH Usage Desc #001: RUH Attributes: Unused 00:11:35.327 RUH Usage Desc #002: RUH Attributes: Unused 00:11:35.327 RUH Usage Desc #003: RUH Attributes: Unused 00:11:35.327 RUH Usage Desc #004: RUH Attributes: Unused 00:11:35.327 RUH Usage Desc #005: RUH Attributes: Unused 00:11:35.327 RUH Usage Desc #006: RUH Attributes: Unused 00:11:35.327 RUH Usage Desc #007: RUH Attributes: Unused 00:11:35.327 00:11:35.327 FDP statistics log page 00:11:35.327 ======================= 00:11:35.327 Host bytes with metadata written: 972242944 00:11:35.327 Media bytes with metadata written: 972353536 00:11:35.327 Media bytes erased: 0 00:11:35.327 00:11:35.327 FDP Reclaim unit handle status 00:11:35.327 ============================== 00:11:35.327 Number of RUHS descriptors: 2 00:11:35.327 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000020cc 00:11:35.327 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:35.327 00:11:35.327 FDP write on placement id: 0 success 00:11:35.327 00:11:35.327 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:35.327 00:11:35.327 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:35.327 00:11:35.327 Get Feature: FDP Events for Placement handle: #0 00:11:35.327 ======================== 00:11:35.327 Number of FDP Events: 6 00:11:35.327 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:35.327 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:35.327 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:11:35.327 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:35.327 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:35.327 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:35.327 00:11:35.327 FDP events log page
00:11:35.327 =================== 00:11:35.327 Number of FDP events: 1 00:11:35.327 FDP Event #0: 00:11:35.327 Event Type: RU Not Written to Capacity 00:11:35.327 Placement Identifier: Valid 00:11:35.327 NSID: Valid 00:11:35.327 Location: Valid 00:11:35.327 Placement Identifier: 0 00:11:35.327 Event Timestamp: 7 00:11:35.327 Namespace Identifier: 1 00:11:35.327 Reclaim Group Identifier: 0 00:11:35.327 Reclaim Unit Handle Identifier: 0 00:11:35.327 00:11:35.327 FDP test passed 00:11:35.327 00:11:35.327 real 0m0.295s 00:11:35.327 user 0m0.092s 00:11:35.327 sys 0m0.102s 00:11:35.327 11:55:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.327 11:55:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:35.327 ************************************ 00:11:35.327 END TEST nvme_flexible_data_placement 00:11:35.327 ************************************ 00:11:35.587 00:11:35.587 real 0m9.344s 00:11:35.587 user 0m1.789s 00:11:35.587 sys 0m2.652s 00:11:35.587 11:55:25 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.587 11:55:25 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:35.587 ************************************ 00:11:35.587 END TEST nvme_fdp 00:11:35.587 ************************************ 00:11:35.587 11:55:25 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:11:35.587 11:55:25 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:35.587 11:55:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:35.587 11:55:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.587 11:55:25 -- common/autotest_common.sh@10 -- # set +x 00:11:35.587 ************************************ 00:11:35.587 START TEST nvme_rpc 00:11:35.587 ************************************ 00:11:35.587 11:55:25 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:35.847 * Looking for test storage... 
00:11:35.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:35.847 11:55:25 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:35.847 11:55:25 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:35.847 11:55:25 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:35.847 11:55:25 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.847 11:55:25 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:11:35.847 11:55:25 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.847 11:55:25 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:35.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.847 --rc genhtml_branch_coverage=1 00:11:35.847 --rc genhtml_function_coverage=1 00:11:35.847 --rc genhtml_legend=1 00:11:35.847 --rc geninfo_all_blocks=1 00:11:35.847 --rc geninfo_unexecuted_blocks=1 00:11:35.847 00:11:35.847 ' 00:11:35.847 11:55:25 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:35.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.847 --rc genhtml_branch_coverage=1 00:11:35.847 --rc genhtml_function_coverage=1 00:11:35.847 --rc genhtml_legend=1 00:11:35.847 --rc geninfo_all_blocks=1 00:11:35.847 --rc geninfo_unexecuted_blocks=1 00:11:35.847 00:11:35.847 ' 00:11:35.847 11:55:25 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:11:35.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.847 --rc genhtml_branch_coverage=1 00:11:35.847 --rc genhtml_function_coverage=1 00:11:35.847 --rc genhtml_legend=1 00:11:35.847 --rc geninfo_all_blocks=1 00:11:35.847 --rc geninfo_unexecuted_blocks=1 00:11:35.847 00:11:35.847 ' 00:11:35.847 11:55:25 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:35.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.847 --rc genhtml_branch_coverage=1 00:11:35.847 --rc genhtml_function_coverage=1 00:11:35.847 --rc genhtml_legend=1 00:11:35.847 --rc geninfo_all_blocks=1 00:11:35.847 --rc geninfo_unexecuted_blocks=1 00:11:35.847 00:11:35.847 ' 00:11:35.847 11:55:25 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:35.847 11:55:25 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:35.847 11:55:25 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:35.847 11:55:25 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:11:35.847 11:55:25 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:11:35.847 11:55:25 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:11:35.847 11:55:25 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:35.847 11:55:25 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:11:35.847 11:55:25 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:35.847 11:55:25 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:35.847 11:55:25 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:35.847 11:55:25 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:35.848 11:55:25 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:35.848 11:55:25 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:11:35.848 11:55:25 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:35.848 11:55:25 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67017 00:11:35.848 11:55:25 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:35.848 11:55:25 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:35.848 11:55:25 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67017 00:11:35.848 11:55:25 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67017 ']' 00:11:35.848 11:55:25 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.848 11:55:25 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.848 11:55:25 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.848 11:55:25 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.848 11:55:25 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.107 [2024-11-27 11:55:25.980870] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:11:36.107 [2024-11-27 11:55:25.981005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67017 ] 00:11:36.366 [2024-11-27 11:55:26.162686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:36.366 [2024-11-27 11:55:26.267140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.366 [2024-11-27 11:55:26.267983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.305 11:55:27 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.305 11:55:27 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:37.305 11:55:27 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:37.564 Nvme0n1 00:11:37.564 11:55:27 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:37.564 11:55:27 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:37.564 request: 00:11:37.564 { 00:11:37.564 "bdev_name": "Nvme0n1", 00:11:37.564 "filename": "non_existing_file", 00:11:37.564 "method": "bdev_nvme_apply_firmware", 00:11:37.564 "req_id": 1 00:11:37.564 } 00:11:37.564 Got JSON-RPC error response 00:11:37.564 response: 00:11:37.564 { 00:11:37.564 "code": -32603, 00:11:37.564 "message": "open file failed." 00:11:37.564 } 00:11:37.564 11:55:27 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:37.564 11:55:27 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:37.564 11:55:27 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:37.824 11:55:27 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:37.824 11:55:27 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67017 00:11:37.824 11:55:27 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67017 ']' 00:11:37.824 11:55:27 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67017 00:11:37.824 11:55:27 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:11:37.824 11:55:27 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:37.824 11:55:27 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67017 00:11:37.824 11:55:27 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:37.824 11:55:27 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:37.824 11:55:27 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67017' 00:11:37.824 killing process with pid 67017 00:11:37.824 11:55:27 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67017 00:11:37.824 11:55:27 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67017 00:11:40.359 00:11:40.359 real 0m4.496s 00:11:40.359 user 0m8.119s 00:11:40.359 sys 0m0.789s 00:11:40.359 11:55:30 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.359 ************************************ 00:11:40.359 11:55:30 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.359 END TEST nvme_rpc 00:11:40.359 ************************************ 00:11:40.359 11:55:30 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:40.359 11:55:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:11:40.359 11:55:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.359 11:55:30 -- common/autotest_common.sh@10 -- # set +x 00:11:40.359 ************************************ 00:11:40.359 START TEST nvme_rpc_timeouts 00:11:40.359 ************************************ 00:11:40.359 11:55:30 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:40.359 * Looking for test storage... 00:11:40.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:40.359 11:55:30 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:40.359 11:55:30 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:11:40.359 11:55:30 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:40.359 11:55:30 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:40.359 11:55:30 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:11:40.359 11:55:30 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.359 11:55:30 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:40.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.359 --rc genhtml_branch_coverage=1 00:11:40.359 --rc genhtml_function_coverage=1 00:11:40.359 --rc genhtml_legend=1 00:11:40.359 --rc geninfo_all_blocks=1 00:11:40.359 --rc geninfo_unexecuted_blocks=1 00:11:40.359 00:11:40.359 ' 00:11:40.359 11:55:30 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:40.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.359 --rc genhtml_branch_coverage=1 00:11:40.359 --rc genhtml_function_coverage=1 00:11:40.359 --rc genhtml_legend=1 00:11:40.359 --rc geninfo_all_blocks=1 00:11:40.359 --rc geninfo_unexecuted_blocks=1 00:11:40.359 00:11:40.359 ' 00:11:40.359 11:55:30 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:40.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.359 --rc genhtml_branch_coverage=1 00:11:40.359 --rc genhtml_function_coverage=1 00:11:40.360 --rc genhtml_legend=1 00:11:40.360 --rc geninfo_all_blocks=1 00:11:40.360 --rc geninfo_unexecuted_blocks=1 00:11:40.360 00:11:40.360 ' 00:11:40.360 11:55:30 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:40.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.360 --rc genhtml_branch_coverage=1 00:11:40.360 --rc genhtml_function_coverage=1 00:11:40.360 --rc genhtml_legend=1 00:11:40.360 --rc geninfo_all_blocks=1 00:11:40.360 --rc geninfo_unexecuted_blocks=1 00:11:40.360 00:11:40.360 ' 00:11:40.360 11:55:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:40.360 11:55:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67089 00:11:40.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
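The scripts/common.sh trace repeated before each test (lt 1.15 2) is the lcov version gate: split both version strings on '.', '-' and ':' and compare component-wise. Condensed to standalone bash, hard-wiring the '<' operator that the real cmp_versions dispatches on and assuming purely numeric components:

    lt() { cmp_versions_lt "$1" "$2"; }
    cmp_versions_lt() {
        local IFS=.-:            # split on the same separators as the trace
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                 # equal is not less-than
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # prints the message

For "1.15" vs "2" the first components already decide it (1 < 2), which is why the trace returns 0 and the branch-coverage LCOV_OPTS are exported.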
00:11:40.360 11:55:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67089 00:11:40.360 11:55:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67126 00:11:40.360 11:55:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:40.360 11:55:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:40.360 11:55:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67126 00:11:40.360 11:55:30 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67126 ']' 00:11:40.360 11:55:30 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.360 11:55:30 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.360 11:55:30 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.360 11:55:30 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.360 11:55:30 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:40.619 [2024-11-27 11:55:30.443799] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:11:40.619 [2024-11-27 11:55:30.444101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67126 ] 00:11:40.619 [2024-11-27 11:55:30.630803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:40.878 [2024-11-27 11:55:30.742549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.878 [2024-11-27 11:55:30.742583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.814 11:55:31 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.814 11:55:31 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:11:41.814 11:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:41.814 Checking default timeout settings: 00:11:41.814 11:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:42.072 11:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:42.072 Making settings changes with rpc: 00:11:42.072 11:55:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:42.330 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:42.330 Check default vs. 
modified settings: 00:11:42.330 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67089 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67089 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:42.590 Setting action_on_timeout is changed as expected. 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67089 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67089 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:42.590 Setting timeout_us is changed as expected. 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
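The check just traced for action_on_timeout and timeout_us (and repeated below for timeout_admin_us) pulls each setting out of the two saved configs and insists the value actually changed. Roughly equivalent standalone bash, with file names following this run's pid suffix 67089; the sed strips the JSON quotes and trailing comma around the value, and the failure branch is an assumption since the trace never takes it:

    check_setting_changed() {
        local setting=$1 before after
        before=$(grep "$setting" /tmp/settings_default_67089 \
                   | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_67089 \
                  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$before" == "$after" ]; then
            echo "Setting $setting was not changed!" >&2   # assumed failure path
            return 1
        fi
        echo "Setting $setting is changed as expected."
    }

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        check_setting_changed "$setting" || exit 1
    done

Here none/abort, 0/12000000 and 0/24000000 all differ, matching the three "changed as expected" lines in this log.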
00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67089 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67089 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:42.590 Setting timeout_admin_us is changed as expected. 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67089 /tmp/settings_modified_67089 00:11:42.590 11:55:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67126 00:11:42.590 11:55:32 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67126 ']' 00:11:42.590 11:55:32 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67126 00:11:42.590 11:55:32 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:11:42.590 11:55:32 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.590 11:55:32 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67126 00:11:42.590 killing process with pid 67126 00:11:42.590 11:55:32 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.590 11:55:32 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.590 11:55:32 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67126' 00:11:42.590 11:55:32 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67126 00:11:42.590 11:55:32 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67126 00:11:45.126 RPC TIMEOUT SETTING TEST PASSED. 00:11:45.126 11:55:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
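killprocess 67126, traced above, inspects the target's process name before signalling it. Reduced to the steps visible in this trace (the real helper in autotest_common.sh has extra handling, e.g. when the wrapper process is sudo; this sketch simply skips that case):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 0                 # assumed no-op if already gone
        local process_name
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here
        fi
        if [[ $process_name != sudo ]]; then       # real helper special-cases sudo
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"    # reaping works because spdk_tgt is a child of this shell
        fi
    }

In this run the name resolves to reactor_0 (the spdk_tgt main reactor), so the plain kill-and-wait path is taken.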
00:11:45.126 ************************************ 00:11:45.126 END TEST nvme_rpc_timeouts 00:11:45.126 ************************************ 00:11:45.126 00:11:45.126 real 0m4.812s 00:11:45.126 user 0m8.999s 00:11:45.126 sys 0m0.813s 00:11:45.126 11:55:34 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.126 11:55:34 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:45.126 11:55:34 -- spdk/autotest.sh@239 -- # uname -s 00:11:45.126 11:55:34 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:11:45.126 11:55:34 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:45.126 11:55:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:45.126 11:55:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.126 11:55:34 -- common/autotest_common.sh@10 -- # set +x 00:11:45.126 ************************************ 00:11:45.126 START TEST sw_hotplug 00:11:45.126 ************************************ 00:11:45.126 11:55:34 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:45.126 * Looking for test storage... 00:11:45.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:45.126 11:55:35 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:45.126 11:55:35 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:11:45.126 11:55:35 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:45.385 11:55:35 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:45.385 11:55:35 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.385 11:55:35 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.385 11:55:35 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.386 11:55:35 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:11:45.386 11:55:35 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.386 11:55:35 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:45.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.386 --rc genhtml_branch_coverage=1 00:11:45.386 --rc genhtml_function_coverage=1 00:11:45.386 --rc genhtml_legend=1 00:11:45.386 --rc geninfo_all_blocks=1 00:11:45.386 --rc geninfo_unexecuted_blocks=1 00:11:45.386 00:11:45.386 ' 00:11:45.386 11:55:35 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:45.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.386 --rc genhtml_branch_coverage=1 00:11:45.386 --rc genhtml_function_coverage=1 00:11:45.386 --rc genhtml_legend=1 00:11:45.386 --rc geninfo_all_blocks=1 00:11:45.386 --rc geninfo_unexecuted_blocks=1 00:11:45.386 00:11:45.386 ' 00:11:45.386 11:55:35 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:45.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.386 --rc genhtml_branch_coverage=1 00:11:45.386 --rc genhtml_function_coverage=1 00:11:45.386 --rc genhtml_legend=1 00:11:45.386 --rc geninfo_all_blocks=1 00:11:45.386 --rc geninfo_unexecuted_blocks=1 00:11:45.386 00:11:45.386 ' 00:11:45.386 11:55:35 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:45.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.386 --rc genhtml_branch_coverage=1 00:11:45.386 --rc genhtml_function_coverage=1 00:11:45.386 --rc genhtml_legend=1 00:11:45.386 --rc geninfo_all_blocks=1 00:11:45.386 --rc geninfo_unexecuted_blocks=1 00:11:45.386 00:11:45.386 ' 00:11:45.386 11:55:35 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:45.955 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:46.216 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:46.216 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:46.216 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:46.216 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:46.216 11:55:36 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:11:46.216 11:55:36 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:11:46.216 11:55:36 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
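The trace that follows expands nvme_in_userspace for sw_hotplug: enumerate PCI functions whose class/subclass/progif is 01/08/02 (NVMe) and keep those currently bound to the kernel nvme driver. Stripped of the PCI_ALLOWED/PCI_BLOCKED filtering and the FreeBSD branch, it amounts to:

    nvme_in_userspace() {
        local bdf bdfs=()
        # lspci -mm -n -D prints: <bdf> "<class>" ...; -p02 marks progif 02
        for bdf in $(lspci -mm -n -D | grep -i -- -p02 \
                       | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' \
                       | tr -d '"'); do
            # only controllers still attached to the kernel nvme driver count
            [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && bdfs+=("$bdf")
        done
        (( ${#bdfs[@]} )) && printf '%s\n' "${bdfs[@]}"
    }

In this run it yields 0000:00:10.0 through 0000:00:13.0, and sw_hotplug then trims the list to nvme_count=2 before setup.sh reset rebinds the devices.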
00:11:46.216 11:55:36 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@233 -- # local class 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:46.216 11:55:36 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:11:46.216 11:55:36 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:46.216 11:55:36 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:11:46.216 11:55:36 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:46.216 11:55:36 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:46.786 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:47.046 Waiting for block devices as requested 00:11:47.306 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:47.306 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:47.306 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:47.565 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:52.861 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:52.861 11:55:42 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:11:52.862 11:55:42 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:53.121 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:11:53.382 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:53.382 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:11:53.642 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:11:54.212 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:54.212 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:54.212 11:55:44 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:11:54.212 11:55:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:54.212 11:55:44 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:11:54.212 11:55:44 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:54.212 11:55:44 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68009 00:11:54.212 11:55:44 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:11:54.212 11:55:44 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:54.212 11:55:44 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:54.212 11:55:44 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:11:54.212 11:55:44 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:54.212 11:55:44 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:54.212 11:55:44 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:54.212 11:55:44 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:54.212 11:55:44 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:11:54.212 11:55:44 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:54.212 11:55:44 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:54.212 11:55:44 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:11:54.212 11:55:44 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:54.212 11:55:44 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:54.472 Initializing NVMe Controllers 00:11:54.472 Attaching to 0000:00:10.0 00:11:54.472 Attaching to 0000:00:11.0 00:11:54.472 Attached to 0000:00:10.0 00:11:54.472 Attached to 0000:00:11.0 00:11:54.472 Initialization complete. Starting I/O... 
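The nvme_in_userspace walk above (iter_pci_class_code 01 08 02) reduces to a single lspci pipeline; here it is lifted out of the trace as a standalone command, with the class/subclass/prog-if values computed the same way the script does (01 = mass storage, 08 = non-volatile memory, 02 = NVMe interface):

  # Enumerate NVMe controllers by PCI class code, exactly as traced above.
  class=$(printf '%02x' 1)       # 01: mass-storage class
  subclass=$(printf '%02x' 8)    # 08: non-volatile memory subclass
  progif=$(printf '%02x' 2)      # 02: NVMe programming interface
  lspci -mm -n -D \
    | grep -i -- "-p${progif}" \
    | awk -v cc="\"${class}${subclass}\"" -F ' ' '{if (cc ~ $2) print $1}' \
    | tr -d '"'
  # On this VM the pipeline yields 0000:00:10.0 through 0000:00:13.0, which the
  # test then truncates to nvme_count=2 before the hotplug loop starts.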
00:11:54.472 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:54.472 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:54.472 00:11:55.852 QEMU NVMe Ctrl (12340 ): 1552 I/Os completed (+1552) 00:11:55.852 QEMU NVMe Ctrl (12341 ): 1553 I/Os completed (+1553) 00:11:55.852 00:11:56.790 QEMU NVMe Ctrl (12340 ): 3676 I/Os completed (+2124) 00:11:56.790 QEMU NVMe Ctrl (12341 ): 3678 I/Os completed (+2125) 00:11:56.790 00:11:57.729 QEMU NVMe Ctrl (12340 ): 5864 I/Os completed (+2188) 00:11:57.729 QEMU NVMe Ctrl (12341 ): 5868 I/Os completed (+2190) 00:11:57.729 00:11:58.667 QEMU NVMe Ctrl (12340 ): 8060 I/Os completed (+2196) 00:11:58.667 QEMU NVMe Ctrl (12341 ): 8064 I/Os completed (+2196) 00:11:58.667 00:11:59.669 QEMU NVMe Ctrl (12340 ): 10244 I/Os completed (+2184) 00:11:59.669 QEMU NVMe Ctrl (12341 ): 10248 I/Os completed (+2184) 00:11:59.669 00:12:00.238 11:55:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:00.238 11:55:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.238 11:55:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.238 [2024-11-27 11:55:50.271904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:00.238 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:00.238 [2024-11-27 11:55:50.274202] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.238 [2024-11-27 11:55:50.274302] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.238 [2024-11-27 11:55:50.274329] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.238 [2024-11-27 11:55:50.274353] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.238 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:00.238 [2024-11-27 11:55:50.277251] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.238 [2024-11-27 11:55:50.277394] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.238 [2024-11-27 11:55:50.277421] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.238 [2024-11-27 11:55:50.277442] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.497 11:55:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.497 11:55:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.497 [2024-11-27 11:55:50.309878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:00.497 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:00.497 [2024-11-27 11:55:50.311426] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.497 [2024-11-27 11:55:50.311471] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.497 [2024-11-27 11:55:50.311499] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.497 [2024-11-27 11:55:50.311519] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.497 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:00.497 [2024-11-27 11:55:50.314021] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.497 [2024-11-27 11:55:50.314061] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.497 [2024-11-27 11:55:50.314081] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.497 [2024-11-27 11:55:50.314097] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.497 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:00.497 EAL: Scan for (pci) bus failed. 00:12:00.497 11:55:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:00.497 11:55:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:00.497 11:55:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:00.497 11:55:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:00.497 11:55:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:00.497 00:12:00.497 11:55:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:00.497 11:55:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:00.497 11:55:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:00.497 11:55:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:00.497 11:55:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:00.497 Attaching to 0000:00:10.0 00:12:00.497 Attached to 0000:00:10.0 00:12:00.757 11:55:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:00.757 11:55:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:00.757 11:55:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:00.757 Attaching to 0000:00:11.0 00:12:00.757 Attached to 0000:00:11.0 00:12:01.695 QEMU NVMe Ctrl (12340 ): 2128 I/Os completed (+2128) 00:12:01.695 QEMU NVMe Ctrl (12341 ): 1868 I/Os completed (+1868) 00:12:01.695 00:12:02.633 QEMU NVMe Ctrl (12340 ): 4356 I/Os completed (+2228) 00:12:02.633 QEMU NVMe Ctrl (12341 ): 4096 I/Os completed (+2228) 00:12:02.633 00:12:03.571 QEMU NVMe Ctrl (12340 ): 6584 I/Os completed (+2228) 00:12:03.571 QEMU NVMe Ctrl (12341 ): 6324 I/Os completed (+2228) 00:12:03.571 00:12:04.508 QEMU NVMe Ctrl (12340 ): 8812 I/Os completed (+2228) 00:12:04.508 QEMU NVMe Ctrl (12341 ): 8553 I/Os completed (+2229) 00:12:04.508 00:12:05.446 QEMU NVMe Ctrl (12340 ): 11044 I/Os completed (+2232) 00:12:05.447 QEMU NVMe Ctrl (12341 ): 10785 I/Os completed (+2232) 00:12:05.447 00:12:06.828 QEMU NVMe Ctrl (12340 ): 13280 I/Os completed (+2236) 00:12:06.828 QEMU NVMe Ctrl (12341 ): 13021 I/Os completed (+2236) 00:12:06.828 00:12:07.765 QEMU NVMe Ctrl (12340 ): 15508 I/Os completed (+2228) 00:12:07.765 QEMU NVMe Ctrl (12341 ): 15249 I/Os completed (+2228) 
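Each hotplug event in this run is the same sysfs sequence the @40/@56/@59-@62 echoes hint at: surprise-remove the allowed controllers, rescan the bus, then steer them back to uio_pci_generic. The trace never shows the redirect targets, so the sysfs paths below are assumptions based on the standard Linux PCI interfaces, not lines from sw_hotplug.sh:

  # Hypothetical expansion of one remove/re-attach cycle (paths assumed).
  nvmes=(0000:00:10.0 0000:00:11.0)               # the two PCI_ALLOWED controllers
  for bdf in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # surprise hot-remove
  done
  echo 1 > /sys/bus/pci/rescan                    # re-discover the slots
  for bdf in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe      # rebind to the userspace driver
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
  done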
00:12:07.765 00:12:08.703 QEMU NVMe Ctrl (12340 ): 17744 I/Os completed (+2236) 00:12:08.703 QEMU NVMe Ctrl (12341 ): 17485 I/Os completed (+2236) 00:12:08.703 00:12:09.640 QEMU NVMe Ctrl (12340 ): 19972 I/Os completed (+2228) 00:12:09.640 QEMU NVMe Ctrl (12341 ): 19716 I/Os completed (+2231) 00:12:09.640 00:12:10.578 QEMU NVMe Ctrl (12340 ): 22196 I/Os completed (+2224) 00:12:10.578 QEMU NVMe Ctrl (12341 ): 21940 I/Os completed (+2224) 00:12:10.578 00:12:11.516 QEMU NVMe Ctrl (12340 ): 24432 I/Os completed (+2236) 00:12:11.516 QEMU NVMe Ctrl (12341 ): 24176 I/Os completed (+2236) 00:12:11.516 00:12:12.453 QEMU NVMe Ctrl (12340 ): 26664 I/Os completed (+2232) 00:12:12.453 QEMU NVMe Ctrl (12341 ): 26408 I/Os completed (+2232) 00:12:12.453 00:12:12.712 11:56:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:12.712 11:56:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:12.712 11:56:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:12.712 11:56:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:12.712 [2024-11-27 11:56:02.654709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:12.712 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:12.712 [2024-11-27 11:56:02.658752] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.712 [2024-11-27 11:56:02.658914] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.712 [2024-11-27 11:56:02.658968] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.712 [2024-11-27 11:56:02.659071] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.712 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:12.712 [2024-11-27 11:56:02.661905] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.712 [2024-11-27 11:56:02.662110] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.712 [2024-11-27 11:56:02.662161] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.712 [2024-11-27 11:56:02.662296] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.712 11:56:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:12.712 11:56:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:12.712 [2024-11-27 11:56:02.691497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:12.712 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:12.712 [2024-11-27 11:56:02.693087] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.713 [2024-11-27 11:56:02.693162] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.713 [2024-11-27 11:56:02.693211] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.713 [2024-11-27 11:56:02.693327] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.713 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:12.713 [2024-11-27 11:56:02.695854] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.713 [2024-11-27 11:56:02.695893] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.713 [2024-11-27 11:56:02.695914] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.713 [2024-11-27 11:56:02.695933] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.713 11:56:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:12.713 11:56:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:12.972 11:56:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:12.972 11:56:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:12.972 11:56:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:12.972 11:56:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:12.972 11:56:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:12.972 11:56:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:12.972 11:56:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:12.972 11:56:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:12.972 Attaching to 0000:00:10.0 00:12:12.972 Attached to 0000:00:10.0 00:12:12.972 11:56:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:13.231 11:56:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:13.231 11:56:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:13.231 Attaching to 0000:00:11.0 00:12:13.231 Attached to 0000:00:11.0 00:12:13.490 QEMU NVMe Ctrl (12340 ): 1212 I/Os completed (+1212) 00:12:13.490 QEMU NVMe Ctrl (12341 ): 960 I/Os completed (+960) 00:12:13.490 00:12:14.428 QEMU NVMe Ctrl (12340 ): 3436 I/Os completed (+2224) 00:12:14.428 QEMU NVMe Ctrl (12341 ): 3184 I/Os completed (+2224) 00:12:14.428 00:12:15.807 QEMU NVMe Ctrl (12340 ): 5664 I/Os completed (+2228) 00:12:15.807 QEMU NVMe Ctrl (12341 ): 5412 I/Os completed (+2228) 00:12:15.807 00:12:16.744 QEMU NVMe Ctrl (12340 ): 7884 I/Os completed (+2220) 00:12:16.744 QEMU NVMe Ctrl (12341 ): 7632 I/Os completed (+2220) 00:12:16.744 00:12:17.683 QEMU NVMe Ctrl (12340 ): 10120 I/Os completed (+2236) 00:12:17.683 QEMU NVMe Ctrl (12341 ): 9868 I/Os completed (+2236) 00:12:17.683 00:12:18.621 QEMU NVMe Ctrl (12340 ): 12348 I/Os completed (+2228) 00:12:18.621 QEMU NVMe Ctrl (12341 ): 12097 I/Os completed (+2229) 00:12:18.621 00:12:19.558 QEMU NVMe Ctrl (12340 ): 14576 I/Os completed (+2228) 00:12:19.558 QEMU NVMe Ctrl (12341 ): 14325 I/Os completed (+2228) 00:12:19.558 00:12:20.496 QEMU NVMe Ctrl (12340 ): 16780 I/Os completed (+2204) 00:12:20.496 QEMU NVMe Ctrl (12341 ): 16529 I/Os completed (+2204) 00:12:20.496 00:12:21.434 
QEMU NVMe Ctrl (12340 ): 19012 I/Os completed (+2232) 00:12:21.434 QEMU NVMe Ctrl (12341 ): 18761 I/Os completed (+2232) 00:12:21.434 00:12:22.818 QEMU NVMe Ctrl (12340 ): 21228 I/Os completed (+2216) 00:12:22.818 QEMU NVMe Ctrl (12341 ): 20977 I/Os completed (+2216) 00:12:22.818 00:12:23.757 QEMU NVMe Ctrl (12340 ): 23456 I/Os completed (+2228) 00:12:23.757 QEMU NVMe Ctrl (12341 ): 23205 I/Os completed (+2228) 00:12:23.757 00:12:24.696 QEMU NVMe Ctrl (12340 ): 25672 I/Os completed (+2216) 00:12:24.696 QEMU NVMe Ctrl (12341 ): 25421 I/Os completed (+2216) 00:12:24.696 00:12:25.265 11:56:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:25.265 11:56:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:25.265 11:56:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:25.265 11:56:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:25.265 [2024-11-27 11:56:15.038127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:25.265 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:25.265 [2024-11-27 11:56:15.039958] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.265 [2024-11-27 11:56:15.040121] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.265 [2024-11-27 11:56:15.040175] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.265 [2024-11-27 11:56:15.040276] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.265 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:25.265 [2024-11-27 11:56:15.043109] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.265 [2024-11-27 11:56:15.043250] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.265 [2024-11-27 11:56:15.043299] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.265 [2024-11-27 11:56:15.043421] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.265 11:56:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:25.265 11:56:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:25.265 [2024-11-27 11:56:15.076008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:25.265 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:25.265 [2024-11-27 11:56:15.077626] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.265 [2024-11-27 11:56:15.077708] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.265 [2024-11-27 11:56:15.077758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.265 [2024-11-27 11:56:15.077886] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.265 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:25.265 [2024-11-27 11:56:15.080670] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.265 [2024-11-27 11:56:15.080778] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.265 [2024-11-27 11:56:15.080830] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.265 [2024-11-27 11:56:15.080871] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.265 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:25.265 11:56:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:25.265 EAL: Scan for (pci) bus failed. 00:12:25.265 11:56:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:25.265 11:56:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:25.265 11:56:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:25.265 11:56:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:25.265 11:56:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:25.265 11:56:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:25.265 11:56:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:25.265 11:56:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:25.265 11:56:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:25.265 Attaching to 0000:00:10.0 00:12:25.265 Attached to 0000:00:10.0 00:12:25.525 11:56:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:25.525 11:56:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:25.525 11:56:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:25.525 Attaching to 0000:00:11.0 00:12:25.525 Attached to 0000:00:11.0 00:12:25.525 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:25.525 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:25.525 [2024-11-27 11:56:15.407847] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:37.810 11:56:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:37.810 11:56:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:37.810 11:56:27 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.14 00:12:37.810 11:56:27 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.14 00:12:37.810 11:56:27 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:37.810 11:56:27 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.14 00:12:37.810 11:56:27 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.14 2 00:12:37.810 remove_attach_helper took 43.14s to complete (handling 2 nvme drive(s)) 11:56:27 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:12:44.384 11:56:33 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68009 00:12:44.384 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68009) - No such process 00:12:44.384 11:56:33 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68009 00:12:44.384 11:56:33 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:12:44.384 11:56:33 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:12:44.384 11:56:33 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:12:44.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.384 11:56:33 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68558 00:12:44.384 11:56:33 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:44.384 11:56:33 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:44.384 11:56:33 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68558 00:12:44.384 11:56:33 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68558 ']' 00:12:44.384 11:56:33 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.384 11:56:33 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.384 11:56:33 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.384 11:56:33 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.384 11:56:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:44.384 [2024-11-27 11:56:33.525821] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
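waitforlisten above blocks until the freshly started spdk_tgt (pid 68558) answers on /var/tmp/spdk.sock before any bdev_nvme_set_hotplug RPC is sent. A minimal equivalent of that wait, assuming this repo's rpc.py location and using rpc_get_methods as the cheap liveness probe:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  spdk_sock=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do
    # Probe the RPC socket; success means the target is up and serving.
    if "$rpc_py" -s "$spdk_sock" rpc_get_methods &> /dev/null; then
      break
    fi
    sleep 0.5
  done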
00:12:44.384 [2024-11-27 11:56:33.526194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68558 ] 00:12:44.384 [2024-11-27 11:56:33.704463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.384 [2024-11-27 11:56:33.815022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.645 11:56:34 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.645 11:56:34 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:12:44.645 11:56:34 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:44.645 11:56:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.645 11:56:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:44.645 11:56:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.645 11:56:34 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:12:44.645 11:56:34 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:44.645 11:56:34 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:44.645 11:56:34 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:44.645 11:56:34 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:44.645 11:56:34 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:44.645 11:56:34 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:44.645 11:56:34 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:44.645 11:56:34 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:44.645 11:56:34 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:44.645 11:56:34 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:44.645 11:56:34 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:44.645 11:56:34 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:51.215 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:51.215 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:51.215 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:51.215 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:51.215 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:51.215 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:51.215 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:51.215 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:51.215 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:51.215 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:51.215 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:51.215 11:56:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.215 11:56:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:51.215 [2024-11-27 11:56:40.764675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:51.215 [2024-11-27 11:56:40.766970] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:51.215 [2024-11-27 11:56:40.767017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.215 [2024-11-27 11:56:40.767040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.215 [2024-11-27 11:56:40.767066] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:51.215 [2024-11-27 11:56:40.767078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.215 [2024-11-27 11:56:40.767093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.215 [2024-11-27 11:56:40.767106] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:51.215 [2024-11-27 11:56:40.767120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.215 [2024-11-27 11:56:40.767133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.216 [2024-11-27 11:56:40.767154] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:51.216 [2024-11-27 11:56:40.767166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.216 [2024-11-27 11:56:40.767179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.216 11:56:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.216 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:51.216 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:51.216 [2024-11-27 11:56:41.164012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:51.216 [2024-11-27 11:56:41.166364] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:51.216 [2024-11-27 11:56:41.166416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.216 [2024-11-27 11:56:41.166451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.216 [2024-11-27 11:56:41.166470] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:51.216 [2024-11-27 11:56:41.166484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.216 [2024-11-27 11:56:41.166495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.216 [2024-11-27 11:56:41.166510] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:51.216 [2024-11-27 11:56:41.166521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.216 [2024-11-27 11:56:41.166535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.216 [2024-11-27 11:56:41.166548] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:51.216 [2024-11-27 11:56:41.166561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.216 [2024-11-27 11:56:41.166583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.475 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:51.475 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:51.475 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:51.475 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:51.475 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:51.475 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:51.475 11:56:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.475 11:56:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:51.475 11:56:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.475 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:51.475 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:51.475 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:51.475 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:51.475 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:51.735 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:51.735 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:51.735 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:51.735 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:51.735 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:51.735 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:51.735 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:51.735 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:03.948 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:03.948 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:03.948 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:03.948 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:03.948 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:03.948 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.948 11:56:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.948 11:56:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.948 11:56:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.948 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:03.948 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:03.948 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:03.948 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:03.948 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:03.948 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:03.948 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:03.948 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:03.948 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:03.948 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:03.948 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:03.948 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.948 11:56:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.948 11:56:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.949 [2024-11-27 11:56:53.843613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:03.949 [2024-11-27 11:56:53.845947] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.949 [2024-11-27 11:56:53.845987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.949 [2024-11-27 11:56:53.846004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.949 [2024-11-27 11:56:53.846038] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.949 [2024-11-27 11:56:53.846053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.949 [2024-11-27 11:56:53.846068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.949 [2024-11-27 11:56:53.846081] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.949 [2024-11-27 11:56:53.846095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.949 [2024-11-27 11:56:53.846107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.949 [2024-11-27 11:56:53.846123] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.949 [2024-11-27 11:56:53.846134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.949 [2024-11-27 11:56:53.846148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.949 11:56:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.949 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:03.949 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:04.208 [2024-11-27 11:56:54.242941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:04.208 [2024-11-27 11:56:54.245139] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.208 [2024-11-27 11:56:54.245175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.208 [2024-11-27 11:56:54.245195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.208 [2024-11-27 11:56:54.245230] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.208 [2024-11-27 11:56:54.245244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.208 [2024-11-27 11:56:54.245255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.208 [2024-11-27 11:56:54.245270] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.208 [2024-11-27 11:56:54.245281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.208 [2024-11-27 11:56:54.245295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.208 [2024-11-27 11:56:54.245308] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.208 [2024-11-27 11:56:54.245321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.208 [2024-11-27 11:56:54.245332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.467 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:04.467 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:04.467 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:04.467 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:04.467 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:04.467 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:04.467 11:56:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.467 11:56:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:04.467 11:56:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.467 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:04.467 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:04.467 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:04.467 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:04.467 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:04.727 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:04.727 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:04.727 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:04.727 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:04.727 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:04.727 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:04.727 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:04.727 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:16.944 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:16.944 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:16.944 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:16.944 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:16.944 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:16.944 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:16.944 11:57:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.944 11:57:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:16.944 11:57:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.944 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:16.944 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:16.944 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:16.944 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:16.944 [2024-11-27 11:57:06.822718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:16.944 [2024-11-27 11:57:06.825282] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.944 [2024-11-27 11:57:06.825328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.944 [2024-11-27 11:57:06.825346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.944 [2024-11-27 11:57:06.825383] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.944 [2024-11-27 11:57:06.825396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.944 [2024-11-27 11:57:06.825414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.944 [2024-11-27 11:57:06.825427] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.944 [2024-11-27 11:57:06.825440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.944 [2024-11-27 11:57:06.825452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.944 [2024-11-27 11:57:06.825468] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.944 [2024-11-27 11:57:06.825479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.944 [2024-11-27 11:57:06.825493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.944 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:16.944 11:57:06 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:16.944 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:16.944 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:16.944 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:16.944 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:16.944 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:16.944 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:16.944 11:57:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.944 11:57:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:16.944 11:57:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.944 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:16.944 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:17.203 [2024-11-27 11:57:07.222078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:13:17.203 [2024-11-27 11:57:07.224322] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:17.203 [2024-11-27 11:57:07.224384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.203 [2024-11-27 11:57:07.224404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.203 [2024-11-27 11:57:07.224441] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:17.203 [2024-11-27 11:57:07.224456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.203 [2024-11-27 11:57:07.224468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.203 [2024-11-27 11:57:07.224484] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:17.203 [2024-11-27 11:57:07.224495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.203 [2024-11-27 11:57:07.224513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.203 [2024-11-27 11:57:07.224526] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:17.203 [2024-11-27 11:57:07.224539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.203 [2024-11-27 11:57:07.224561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.462 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:17.462 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:17.462 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:17.462 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:17.462 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:17.462 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:13:17.462 11:57:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.462 11:57:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:17.462 11:57:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.462 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:17.462 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:17.721 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:17.721 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:17.721 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:17.721 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:17.721 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:17.721 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:17.721 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:17.721 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:17.721 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:17.980 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:17.980 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:30.223 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:30.223 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:30.223 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:30.223 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:30.223 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:30.223 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:30.223 11:57:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.223 11:57:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:30.223 11:57:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.223 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:30.223 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:30.223 11:57:19 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.15 00:13:30.223 11:57:19 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.15 00:13:30.223 11:57:19 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:30.223 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.15 00:13:30.223 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.15 2 00:13:30.223 remove_attach_helper took 45.15s to complete (handling 2 nvme drive(s)) 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:13:30.223 11:57:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.223 11:57:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:30.223 11:57:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.223 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:30.223 11:57:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.223 11:57:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:30.223 11:57:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.223 11:57:19 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:13:30.223 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:30.223 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:30.223 11:57:19 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:13:30.223 11:57:19 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:13:30.223 11:57:19 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:13:30.223 11:57:19 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:13:30.223 11:57:19 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:13:30.223 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:30.223 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:30.223 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:30.223 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:30.223 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:36.792 11:57:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:36.792 11:57:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:36.792 11:57:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:36.792 11:57:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:36.792 11:57:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:36.792 11:57:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:36.792 11:57:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:36.792 11:57:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:36.792 11:57:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:36.792 11:57:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:36.792 11:57:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:36.792 11:57:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.792 11:57:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:36.792 [2024-11-27 11:57:25.951476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:36.792 [2024-11-27 11:57:25.953570] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.792 [2024-11-27 11:57:25.953615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.792 [2024-11-27 11:57:25.953632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.792 [2024-11-27 11:57:25.953657] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.792 [2024-11-27 11:57:25.953670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.792 [2024-11-27 11:57:25.953685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.792 [2024-11-27 11:57:25.953698] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.792 [2024-11-27 11:57:25.953712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.792 [2024-11-27 11:57:25.953724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.792 [2024-11-27 11:57:25.953742] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.792 [2024-11-27 11:57:25.953753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.792 [2024-11-27 11:57:25.953770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.792 11:57:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.792 11:57:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:36.792 11:57:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:36.792 [2024-11-27 11:57:26.350824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:36.792 [2024-11-27 11:57:26.352399] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.792 [2024-11-27 11:57:26.352445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.792 [2024-11-27 11:57:26.352463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.792 [2024-11-27 11:57:26.352497] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.792 [2024-11-27 11:57:26.352511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.792 [2024-11-27 11:57:26.352524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.792 [2024-11-27 11:57:26.352542] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.792 [2024-11-27 11:57:26.352553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.792 [2024-11-27 11:57:26.352568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.792 [2024-11-27 11:57:26.352582] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.792 [2024-11-27 11:57:26.352606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.792 [2024-11-27 11:57:26.352618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.792 11:57:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:36.792 11:57:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:36.792 11:57:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:36.792 11:57:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:36.792 11:57:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:36.792 11:57:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:36.792 11:57:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.792 11:57:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:36.792 11:57:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.792 11:57:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:36.792 11:57:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:36.792 11:57:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:36.792 11:57:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:36.792 11:57:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:36.792 11:57:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:36.792 11:57:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:36.792 11:57:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:36.792 11:57:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:36.792 11:57:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
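sw_hotplug.sh@12-@13 and the @50/@51 loop traced above show how the test confirms the removal took effect on the SPDK side: it asks the target for its bdevs over JSON-RPC, extracts each NVMe bdev's PCI address with jq, and keeps sleeping while any address is still reported. Reconstructed from the traced commands (the function body and jq filter are verbatim from the trace; the surrounding loop structure is inferred):

    # List the PCI addresses (BDFs) still backing NVMe bdevs in the target.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # Poll until every hot-removed controller has disappeared from the target.
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done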
00:13:36.792 11:57:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:37.054 11:57:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:37.054 11:57:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:49.327 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:49.327 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:49.327 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:49.327 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:49.327 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:49.327 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:49.327 11:57:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.327 11:57:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:49.327 11:57:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.327 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:49.327 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:49.327 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:49.327 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:49.327 [2024-11-27 11:57:38.930600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:49.327 [2024-11-27 11:57:38.933143] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.327 [2024-11-27 11:57:38.933194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.327 [2024-11-27 11:57:38.933212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.327 [2024-11-27 11:57:38.933238] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.327 [2024-11-27 11:57:38.933250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.327 [2024-11-27 11:57:38.933264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.327 [2024-11-27 11:57:38.933277] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.327 [2024-11-27 11:57:38.933291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.327 [2024-11-27 11:57:38.933303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.327 [2024-11-27 11:57:38.933318] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.327 [2024-11-27 11:57:38.933329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.327 [2024-11-27 11:57:38.933343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.327 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:49.327 11:57:38 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:13:49.327 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:49.327 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:49.327 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:49.327 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:49.327 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:49.327 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:49.327 11:57:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.327 11:57:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:49.327 11:57:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.327 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:49.327 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:49.327 [2024-11-27 11:57:39.329955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:13:49.327 [2024-11-27 11:57:39.334471] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.327 [2024-11-27 11:57:39.334510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.327 [2024-11-27 11:57:39.334530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.327 [2024-11-27 11:57:39.334550] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.327 [2024-11-27 11:57:39.334567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.327 [2024-11-27 11:57:39.334579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.327 [2024-11-27 11:57:39.334595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.327 [2024-11-27 11:57:39.334606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.327 [2024-11-27 11:57:39.334620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.327 [2024-11-27 11:57:39.334634] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.327 [2024-11-27 11:57:39.334650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.327 [2024-11-27 11:57:39.334662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.586 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:49.586 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:49.586 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:49.586 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:49.586 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:49.586 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
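rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client, so the query this loop keeps issuing can also be reproduced outside the harness with scripts/rpc.py. A standalone equivalent, hedged: the socket path matches the DEFAULT_RPC_ADDR exported later in this log, but assumes a default-configuration target:

    # Ask a running SPDK target which PCI devices still back NVMe bdevs.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u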
00:13:49.586 11:57:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.586 11:57:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:49.586 11:57:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.586 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:49.586 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:49.846 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:49.846 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:49.846 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:49.846 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:49.846 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:49.846 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:49.846 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:49.846 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:49.846 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:49.846 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:49.846 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:02.061 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:02.061 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:02.061 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:02.061 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:02.061 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:02.061 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:02.061 11:57:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.061 11:57:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:02.061 11:57:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.061 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:02.061 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:02.061 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:02.061 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:02.061 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:02.061 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:02.061 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:02.061 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:02.061 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:02.061 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:02.061 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:02.061 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:02.061 11:57:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.061 11:57:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:02.061 [2024-11-27 11:57:52.009598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
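Between iterations the trace echoes 1 (sw_hotplug.sh@56) and then, per device, the driver name, the BDF twice, and an empty string (@58-@62). As with the remove step, xtrace strips the redirection targets, so the exact sysfs files cannot be recovered from this log. The conventional rescan-and-rebind sequence that produces this value pattern looks roughly like the following; every path here is an assumption:

    # Rescan the bus so removed devices reappear, then steer each one to
    # uio_pci_generic.  ALL paths are assumptions: xtrace shows only the
    # echoed values, never where they were written.
    echo 1 > /sys/bus/pci/rescan
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        # The trace echoes the BDF twice here (@60/@61); plausibly an
        # unbind followed by a probe, but the targets are not visible.
        echo "$dev" > /sys/bus/pci/drivers_probe
        echo ''     > "/sys/bus/pci/devices/$dev/driver_override"   # @62: clear override
    done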
00:14:02.061 [2024-11-27 11:57:52.014412] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.061 [2024-11-27 11:57:52.014460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.061 [2024-11-27 11:57:52.014479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.061 [2024-11-27 11:57:52.014505] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.061 [2024-11-27 11:57:52.014517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.061 [2024-11-27 11:57:52.014532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.061 [2024-11-27 11:57:52.014544] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.061 [2024-11-27 11:57:52.014562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.061 [2024-11-27 11:57:52.014573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.061 [2024-11-27 11:57:52.014589] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.061 [2024-11-27 11:57:52.014600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.061 [2024-11-27 11:57:52.014614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.061 11:57:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.061 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:02.061 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:02.630 [2024-11-27 11:57:52.408942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:02.630 [2024-11-27 11:57:52.410545] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.630 [2024-11-27 11:57:52.410582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.630 [2024-11-27 11:57:52.410600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.630 [2024-11-27 11:57:52.410620] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.630 [2024-11-27 11:57:52.410634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.630 [2024-11-27 11:57:52.410647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.630 [2024-11-27 11:57:52.410663] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.630 [2024-11-27 11:57:52.410674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.630 [2024-11-27 11:57:52.410689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.630 [2024-11-27 11:57:52.410702] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.630 [2024-11-27 11:57:52.410722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.630 [2024-11-27 11:57:52.410733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.630 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:02.630 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:02.630 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:02.630 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:02.630 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:02.630 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:02.630 11:57:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.630 11:57:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:02.630 11:57:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.630 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:02.630 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:02.889 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:02.889 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:02.889 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:02.889 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:02.889 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:02.889 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:02.889 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:02.889 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
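The elapsed-time figure reported just below (45.12) comes from the timing wrapper traced at the start of this test: TIMEFORMAT=%2R makes bash's time builtin print only wall-clock seconds with two decimals, which the harness captures and stores as helper_time. A compact sketch of that pattern (simplified relative to autotest_common.sh, which also preserves the timed command's own output):

    # Time a command to 2-decimal wall-clock seconds with bash's `time` builtin.
    timing_cmd() {
        local time TIMEFORMAT=%2R       # %2R: elapsed real time only
        # `time` reports on stderr; capture that report.  For simplicity
        # this sketch discards the timed command's own output.
        time=$( { time "$@" >/dev/null 2>&1; } 2>&1 )
        echo "$time"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2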
00:14:02.889 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:02.889 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:02.889 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:15.103 11:58:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:15.103 11:58:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:15.103 11:58:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:15.103 11:58:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:15.103 11:58:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:15.103 11:58:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:15.103 11:58:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.103 11:58:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:15.103 11:58:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.103 11:58:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:15.103 11:58:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:15.103 11:58:04 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.12 00:14:15.103 11:58:04 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.12 00:14:15.103 11:58:04 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:15.103 11:58:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.12 00:14:15.103 11:58:04 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.12 2 00:14:15.103 remove_attach_helper took 45.12s to complete (handling 2 nvme drive(s)) 11:58:04 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:14:15.103 11:58:04 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68558 00:14:15.103 11:58:04 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68558 ']' 00:14:15.103 11:58:04 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68558 00:14:15.103 11:58:05 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:14:15.103 11:58:05 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:15.103 11:58:05 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68558 00:14:15.103 11:58:05 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:15.103 11:58:05 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:15.103 killing process with pid 68558 00:14:15.103 11:58:05 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68558' 00:14:15.103 11:58:05 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68558 00:14:15.103 11:58:05 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68558 00:14:17.640 11:58:07 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:17.900 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:18.469 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:18.469 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:18.728 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:18.728 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:18.728 00:14:18.728 real 2m33.696s 00:14:18.728 user 1m51.064s 00:14:18.728 sys 0m22.876s 00:14:18.728 11:58:08 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.728 11:58:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:18.728 ************************************ 00:14:18.728 END TEST sw_hotplug 00:14:18.728 ************************************ 00:14:18.728 11:58:08 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:14:18.728 11:58:08 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:18.728 11:58:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:18.728 11:58:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.728 11:58:08 -- common/autotest_common.sh@10 -- # set +x 00:14:18.728 ************************************ 00:14:18.728 START TEST nvme_xnvme 00:14:18.728 ************************************ 00:14:18.728 11:58:08 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:18.989 * Looking for test storage... 00:14:18.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:18.989 11:58:08 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:18.989 11:58:08 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:14:18.989 11:58:08 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:18.989 11:58:08 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:18.989 11:58:08 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:14:18.989 11:58:08 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:18.989 11:58:08 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:18.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.989 --rc genhtml_branch_coverage=1 00:14:18.989 --rc genhtml_function_coverage=1 00:14:18.989 --rc genhtml_legend=1 00:14:18.989 --rc geninfo_all_blocks=1 00:14:18.989 --rc geninfo_unexecuted_blocks=1 00:14:18.989 00:14:18.989 ' 00:14:18.989 11:58:08 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:18.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.989 --rc genhtml_branch_coverage=1 00:14:18.989 --rc genhtml_function_coverage=1 00:14:18.989 --rc genhtml_legend=1 00:14:18.989 --rc geninfo_all_blocks=1 00:14:18.989 --rc geninfo_unexecuted_blocks=1 00:14:18.989 00:14:18.989 ' 00:14:18.989 11:58:08 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:18.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.989 --rc genhtml_branch_coverage=1 00:14:18.989 --rc genhtml_function_coverage=1 00:14:18.989 --rc genhtml_legend=1 00:14:18.989 --rc geninfo_all_blocks=1 00:14:18.989 --rc geninfo_unexecuted_blocks=1 00:14:18.989 00:14:18.989 ' 00:14:18.989 11:58:08 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:18.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.989 --rc genhtml_branch_coverage=1 00:14:18.989 --rc genhtml_function_coverage=1 00:14:18.989 --rc genhtml_legend=1 00:14:18.989 --rc geninfo_all_blocks=1 00:14:18.989 --rc geninfo_unexecuted_blocks=1 00:14:18.989 00:14:18.989 ' 00:14:18.989 11:58:08 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:14:18.989 11:58:08 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:14:18.989 11:58:08 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:18.989 11:58:08 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:14:18.989 11:58:08 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:18.989 11:58:08 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:18.989 11:58:08 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:18.989 11:58:08 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:14:18.989 11:58:08 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:14:18.989 11:58:08 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:14:18.989 11:58:08 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:18.989 11:58:08 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:14:18.989 11:58:08 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:18.989 11:58:08 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:18.989 11:58:08 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:18.989 11:58:08 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:18.989 11:58:08 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:18.989 11:58:08 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:18.989 11:58:08 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:18.989 11:58:08 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:18.989 11:58:08 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:18.989 11:58:08 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:18.989 11:58:08 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:18.989 11:58:08 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:18.989 11:58:08 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:18.989 11:58:08 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:18.989 11:58:08 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:14:18.990 11:58:08 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:14:18.990 11:58:09 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:14:18.990 11:58:09 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:14:18.990 11:58:09 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:18.990 11:58:09 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:18.990 11:58:09 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:14:18.990 11:58:09 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:14:18.990 11:58:09 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:14:18.990 11:58:09 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:14:18.990 11:58:09 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:14:18.990 11:58:09 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:14:18.990 11:58:09 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:18.990 11:58:09 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:18.990 11:58:09 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:18.990 11:58:09 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:18.990 11:58:09 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:18.990 11:58:09 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:18.990 11:58:09 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:14:18.990 11:58:09 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:18.990 #define SPDK_CONFIG_H 00:14:18.990 #define SPDK_CONFIG_AIO_FSDEV 1 00:14:18.990 #define SPDK_CONFIG_APPS 1 00:14:18.990 #define SPDK_CONFIG_ARCH native 00:14:18.990 #define SPDK_CONFIG_ASAN 1 00:14:18.990 #undef SPDK_CONFIG_AVAHI 00:14:18.990 #undef SPDK_CONFIG_CET 00:14:18.990 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:14:18.990 #define SPDK_CONFIG_COVERAGE 1 00:14:18.990 #define SPDK_CONFIG_CROSS_PREFIX 00:14:18.990 #undef SPDK_CONFIG_CRYPTO 00:14:18.990 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:18.990 #undef SPDK_CONFIG_CUSTOMOCF 00:14:18.990 #undef SPDK_CONFIG_DAOS 00:14:18.990 #define SPDK_CONFIG_DAOS_DIR 00:14:18.990 #define SPDK_CONFIG_DEBUG 1 00:14:18.990 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:18.990 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:14:18.990 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:18.990 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:18.990 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:18.990 #undef SPDK_CONFIG_DPDK_UADK 00:14:18.990 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:18.990 #define SPDK_CONFIG_EXAMPLES 1 00:14:18.990 #undef SPDK_CONFIG_FC 00:14:18.990 #define SPDK_CONFIG_FC_PATH 00:14:18.990 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:18.990 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:18.990 #define SPDK_CONFIG_FSDEV 1 00:14:18.990 #undef SPDK_CONFIG_FUSE 00:14:18.990 #undef SPDK_CONFIG_FUZZER 00:14:18.990 #define SPDK_CONFIG_FUZZER_LIB 00:14:18.990 #undef SPDK_CONFIG_GOLANG 00:14:18.990 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:18.990 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:18.990 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:18.990 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:18.990 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:18.990 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:18.990 #undef SPDK_CONFIG_HAVE_LZ4 00:14:18.990 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:14:18.990 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:14:18.990 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:18.990 #define SPDK_CONFIG_IDXD 1 00:14:18.990 #define SPDK_CONFIG_IDXD_KERNEL 1 00:14:18.990 #undef SPDK_CONFIG_IPSEC_MB 00:14:18.990 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:18.990 #define SPDK_CONFIG_ISAL 1 00:14:18.990 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:18.990 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:18.990 #define SPDK_CONFIG_LIBDIR 00:14:18.990 #undef SPDK_CONFIG_LTO 00:14:18.990 #define SPDK_CONFIG_MAX_LCORES 128 00:14:18.990 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:14:18.990 #define SPDK_CONFIG_NVME_CUSE 1 00:14:18.990 #undef SPDK_CONFIG_OCF 00:14:18.990 #define SPDK_CONFIG_OCF_PATH 00:14:18.990 #define SPDK_CONFIG_OPENSSL_PATH 00:14:18.990 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:18.990 #define SPDK_CONFIG_PGO_DIR 00:14:18.990 #undef SPDK_CONFIG_PGO_USE 00:14:18.990 #define SPDK_CONFIG_PREFIX /usr/local 00:14:18.990 #undef SPDK_CONFIG_RAID5F 00:14:18.990 #undef SPDK_CONFIG_RBD 00:14:18.990 #define SPDK_CONFIG_RDMA 1 00:14:18.990 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:18.990 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:18.990 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:18.990 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:18.990 #define SPDK_CONFIG_SHARED 1 00:14:18.990 #undef SPDK_CONFIG_SMA 00:14:18.990 #define SPDK_CONFIG_TESTS 1 00:14:18.990 #undef SPDK_CONFIG_TSAN 00:14:18.991 #define SPDK_CONFIG_UBLK 1 00:14:18.991 #define SPDK_CONFIG_UBSAN 1 00:14:18.991 #undef SPDK_CONFIG_UNIT_TESTS 00:14:18.991 #undef SPDK_CONFIG_URING 00:14:18.991 #define SPDK_CONFIG_URING_PATH 00:14:18.991 #undef SPDK_CONFIG_URING_ZNS 00:14:18.991 #undef SPDK_CONFIG_USDT 00:14:18.991 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:18.991 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:18.991 #undef SPDK_CONFIG_VFIO_USER 00:14:18.991 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:18.991 #define SPDK_CONFIG_VHOST 1 00:14:18.991 #define SPDK_CONFIG_VIRTIO 1 00:14:18.991 #undef SPDK_CONFIG_VTUNE 00:14:18.991 #define SPDK_CONFIG_VTUNE_DIR 00:14:18.991 #define SPDK_CONFIG_WERROR 1 00:14:18.991 #define SPDK_CONFIG_WPDK_DIR 00:14:18.991 #define SPDK_CONFIG_XNVME 1 00:14:18.991 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:18.991 11:58:09 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:18.991 11:58:09 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:18.991 11:58:09 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:14:18.991 11:58:09 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.991 11:58:09 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.991 11:58:09 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.991 11:58:09 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.991 11:58:09 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.991 11:58:09 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.991 11:58:09 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:18.991 11:58:09 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.991 11:58:09 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:18.991 11:58:09 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:18.991 11:58:09 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:19.253 11:58:09 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:19.253 11:58:09 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:14:19.253 11:58:09 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:14:19.253 11:58:09 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:14:19.253 11:58:09 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:14:19.253 11:58:09 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:14:19.253 11:58:09 nvme_xnvme -- pm/common@68 -- # uname -s 00:14:19.253 11:58:09 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:14:19.253 11:58:09 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:19.253 
11:58:09 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:19.253 11:58:09 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:19.253 11:58:09 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:19.253 11:58:09 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:19.253 11:58:09 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:19.253 11:58:09 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:14:19.253 11:58:09 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:19.253 11:58:09 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:19.253 11:58:09 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:19.253 11:58:09 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:19.253 11:58:09 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:14:19.253 11:58:09 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:14:19.253 11:58:09 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:14:19.254 11:58:09 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:19.254 11:58:09 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
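A note on the sanitizer environment traced above: the harness rebuilds its LeakSanitizer suppression file on every run (rm -rf the old file, append one leak:<pattern> line per noisy library, here leak:libfuse3.so, then point LSAN_OPTIONS at it) alongside the ASAN and UBSAN option strings. A minimal standalone sketch of that pattern, reusing the exact option values visible in the trace; only the shebang and strict mode are additions:

#!/usr/bin/env bash
# Sketch of the sanitizer setup from autotest_common.sh above.
set -euo pipefail

asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"

# One "leak:<library>" line per leak source LeakSanitizer should ignore.
echo "leak:libfuse3.so" >> "$asan_suppression_file"

# Abort on the first sanitizer error and keep coredumps, as in the trace.
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
export LSAN_OPTIONS=suppressions=$asan_suppression_file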
00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 69905 ]] 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 69905 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:14:19.254 11:58:09 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.gDKyC5 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.gDKyC5/tests/xnvme /tmp/spdk.gDKyC5 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:14:19.255 11:58:09 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975068672 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593210880 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261661696 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975068672 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593210880 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266277888 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:19.255 11:58:09 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97951813632 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=1750966272 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:14:19.255 * Looking for test storage... 
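For readers following set_test_storage in the trace: it snapshots df output into per-mount associative arrays, then walks the candidate directories (the test dir, a mktemp fallback under /tmp) and keeps the first one whose filesystem has at least requested_size bytes free. A rough standalone equivalent of that selection loop follows; the harness's exact df flags are not visible here, so this sketch assumes --block-size=1 so the arrays hold byte counts like the logged values:

#!/usr/bin/env bash
# Sketch of the test-storage probe traced above (not the harness verbatim).
# Usage: ./probe.sh CANDIDATE_DIR [CANDIDATE_DIR ...]
set -euo pipefail

requested_size=2214592512   # 2 GiB plus 64 MiB of overhead, as in the log
declare -A mounts fss sizes avails uses

# Mirror the read loop at @373: one associative-array entry per mount point.
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$size
    uses["$mount"]=$use
    avails["$mount"]=$avail
done < <(df -T --block-size=1 | grep -v Filesystem)

for target_dir in "$@"; do
    # Same awk as the trace: find the mount point backing this candidate.
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails["$mount"]}
    if ((target_space != 0 && target_space >= requested_size)); then
        printf '* Found test storage at %s\n' "$target_dir"
        exit 0
    fi
done
exit 1

Invoked as ./probe.sh /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp, it reproduces the "* Found test storage at ..." outcome seen above (the harness additionally special-cases tmpfs/ramfs mounts, which this sketch skips).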
00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13975068672 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:19.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:19.255 11:58:09 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:19.255 11:58:09 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:19.255 11:58:09 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:19.255 11:58:09 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:19.255 11:58:09 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:19.255 11:58:09 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:14:19.256 11:58:09 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:19.256 11:58:09 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:19.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.256 --rc genhtml_branch_coverage=1 00:14:19.256 --rc genhtml_function_coverage=1 00:14:19.256 --rc genhtml_legend=1 00:14:19.256 --rc geninfo_all_blocks=1 00:14:19.256 --rc geninfo_unexecuted_blocks=1 00:14:19.256 00:14:19.256 ' 00:14:19.256 11:58:09 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:19.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.256 --rc genhtml_branch_coverage=1 00:14:19.256 --rc genhtml_function_coverage=1 00:14:19.256 --rc genhtml_legend=1 00:14:19.256 --rc geninfo_all_blocks=1 
00:14:19.256 --rc geninfo_unexecuted_blocks=1 00:14:19.256 00:14:19.256 ' 00:14:19.256 11:58:09 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:19.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.256 --rc genhtml_branch_coverage=1 00:14:19.256 --rc genhtml_function_coverage=1 00:14:19.256 --rc genhtml_legend=1 00:14:19.256 --rc geninfo_all_blocks=1 00:14:19.256 --rc geninfo_unexecuted_blocks=1 00:14:19.256 00:14:19.256 ' 00:14:19.256 11:58:09 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:19.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.256 --rc genhtml_branch_coverage=1 00:14:19.256 --rc genhtml_function_coverage=1 00:14:19.256 --rc genhtml_legend=1 00:14:19.256 --rc geninfo_all_blocks=1 00:14:19.256 --rc geninfo_unexecuted_blocks=1 00:14:19.256 00:14:19.256 ' 00:14:19.256 11:58:09 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.256 11:58:09 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.256 11:58:09 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.256 11:58:09 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.256 11:58:09 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.256 11:58:09 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:19.256 11:58:09 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.256 11:58:09 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:14:19.256 11:58:09 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:14:19.256 11:58:09 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:14:19.256 11:58:09 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:14:19.256 11:58:09 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:14:19.256 11:58:09 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:14:19.256 11:58:09 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:14:19.256 11:58:09 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:14:19.256 11:58:09 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:14:19.256 11:58:09 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:14:19.256 11:58:09 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:14:19.256 11:58:09 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:14:19.256 11:58:09 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:14:19.256 11:58:09 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:14:19.256 11:58:09 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:14:19.256 11:58:09 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:14:19.256 11:58:09 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:14:19.256 11:58:09 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:14:19.256 11:58:09 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:14:19.256 11:58:09 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:14:19.256 11:58:09 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:14:19.256 11:58:09 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:19.825 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:20.086 Waiting for block devices as requested 00:14:20.345 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:20.345 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:20.345 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:20.605 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:25.884 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:25.884 11:58:15 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:14:26.143 11:58:15 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:14:26.143 11:58:15 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:14:26.403 11:58:16 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:14:26.403 11:58:16 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:14:26.403 11:58:16 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:14:26.403 11:58:16 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:14:26.403 11:58:16 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:14:26.403 No valid GPT data, bailing 00:14:26.403 11:58:16 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:14:26.403 11:58:16 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:14:26.403 11:58:16 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:14:26.403 11:58:16 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:14:26.403 11:58:16 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:14:26.403 11:58:16 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:14:26.403 11:58:16 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:14:26.403 11:58:16 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:14:26.403 11:58:16 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:26.403 11:58:16 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:26.403 11:58:16 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:14:26.403 11:58:16 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:14:26.403 11:58:16 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:26.403 11:58:16 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:26.403 11:58:16 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:26.403 11:58:16 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:26.403 11:58:16 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:26.403 11:58:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:26.403 11:58:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.403 11:58:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:26.403 ************************************ 00:14:26.403 START TEST xnvme_rpc 00:14:26.403 ************************************ 00:14:26.403 11:58:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:26.403 11:58:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:26.403 11:58:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:26.403 11:58:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:26.403 11:58:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:26.403 11:58:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70307 00:14:26.403 11:58:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:26.403 11:58:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70307 00:14:26.403 11:58:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70307 ']' 00:14:26.403 11:58:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.403 11:58:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.403 11:58:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.403 11:58:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.403 11:58:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.403 [2024-11-27 11:58:16.393177] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
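Two harness details around this point deserve a gloss. First, the lcov check a few steps back (scripts/common.sh decimal/cmp_versions) is an ordinary dotted-version comparison: split both versions on IFS=.-:, compare component by component, and pad the shorter one with zeros. A compact sketch of that logic, assuming numeric components (the real helper also validates each one via decimal):

#!/usr/bin/env bash
# Sketch of the "lt 1.15 2" comparison traced above: is lcov 1.15 older than 2?
lt() {
    local IFS=.-: i
    local -a ver1=($1) ver2=($2)   # intentional word splitting on IFS
    for ((i = 0; i < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); i++)); do
        ((${ver1[i]:-0} > ${ver2[i]:-0})) && return 1
        ((${ver1[i]:-0} < ${ver2[i]:-0})) && return 0
    done
    return 1   # versions are equal
}
lt 1.15 2 && echo "old lcov: enable the branch/function coverage --rc flags"

Second, prep_nvme immediately above frees the namespaces for the test: it reloads the nvme module with poll_queues=10, then treats /dev/nvme0n1 as usable because neither SPDK's spdk-gpt.py helper nor blkid reports a partition table on it ("No valid GPT data, bailing" followed by return 1 is the not-in-use path). A hedged reconstruction of that probe; the real block_in_use in scripts/common.sh has more branches, and only commands visible in the trace appear here:

#!/usr/bin/env bash
# Sketch of the namespace-availability probe traced above.
SPDK_ROOT=${SPDK_ROOT:-/home/vagrant/spdk_repo/spdk}   # path as in the log

# Destructive on a machine whose root sits on NVMe; the CI VM boots from
# virtio (vda), which is why the harness can afford the module reload.
modprobe -r nvme
modprobe nvme poll_queues=10

# Returns 0 ("in use") when a partition table is detected on the device.
block_in_use() {
    local block=$1 pt
    "$SPDK_ROOT/scripts/spdk-gpt.py" "$block" && return 0
    pt=$(blkid -s PTTYPE -o value "$block")
    [[ -n $pt ]]
}

shopt -s extglob
for nvme in /dev/nvme*n!(*p*); do   # whole namespaces, skipping partitions
    block_in_use "$nvme" || echo "$nvme is free for xnvme tests"
done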
00:14:26.403 [2024-11-27 11:58:16.393311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70307 ] 00:14:26.663 [2024-11-27 11:58:16.575202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.663 [2024-11-27 11:58:16.679550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.602 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.603 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:27.603 11:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:14:27.603 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.603 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.603 xnvme_bdev 00:14:27.603 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.603 11:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:27.603 11:58:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:27.603 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.603 11:58:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:27.603 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.603 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.603 11:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:27.603 11:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:27.603 11:58:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:27.603 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.603 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.603 11:58:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70307 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70307 ']' 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70307 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70307 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.862 killing process with pid 70307 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70307' 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70307 00:14:27.862 11:58:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70307 00:14:30.402 00:14:30.402 real 0m4.049s 00:14:30.402 user 0m3.984s 00:14:30.402 sys 0m0.607s 00:14:30.402 11:58:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.402 11:58:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.402 ************************************ 00:14:30.402 END TEST xnvme_rpc 00:14:30.402 ************************************ 00:14:30.402 11:58:20 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:30.402 11:58:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:30.402 11:58:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.402 11:58:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:30.402 ************************************ 00:14:30.402 START TEST xnvme_bdevperf 00:14:30.402 ************************************ 00:14:30.402 11:58:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:30.402 11:58:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:30.402 11:58:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:14:30.402 11:58:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:30.402 11:58:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:30.402 11:58:20 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:30.402 11:58:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:30.402 11:58:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:30.662 { 00:14:30.662 "subsystems": [ 00:14:30.662 { 00:14:30.662 "subsystem": "bdev", 00:14:30.662 "config": [ 00:14:30.662 { 00:14:30.662 "params": { 00:14:30.662 "io_mechanism": "libaio", 00:14:30.662 "conserve_cpu": false, 00:14:30.662 "filename": "/dev/nvme0n1", 00:14:30.662 "name": "xnvme_bdev" 00:14:30.662 }, 00:14:30.662 "method": "bdev_xnvme_create" 00:14:30.662 }, 00:14:30.662 { 00:14:30.662 "method": "bdev_wait_for_examine" 00:14:30.662 } 00:14:30.662 ] 00:14:30.662 } 00:14:30.662 ] 00:14:30.662 } 00:14:30.662 [2024-11-27 11:58:20.504802] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:14:30.662 [2024-11-27 11:58:20.504931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70388 ] 00:14:30.662 [2024-11-27 11:58:20.685361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.922 [2024-11-27 11:58:20.802846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.181 Running I/O for 5 seconds... 00:14:33.121 42479.00 IOPS, 165.93 MiB/s [2024-11-27T11:58:24.551Z] 42650.50 IOPS, 166.60 MiB/s [2024-11-27T11:58:25.485Z] 42737.00 IOPS, 166.94 MiB/s [2024-11-27T11:58:26.421Z] 41091.00 IOPS, 160.51 MiB/s 00:14:36.368 Latency(us) 00:14:36.368 [2024-11-27T11:58:26.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.368 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:36.368 xnvme_bdev : 5.00 41556.76 162.33 0.00 0.00 1536.66 165.32 10580.51 00:14:36.368 [2024-11-27T11:58:26.421Z] =================================================================================================================== 00:14:36.368 [2024-11-27T11:58:26.421Z] Total : 41556.76 162.33 0.00 0.00 1536.66 165.32 10580.51 00:14:37.305 11:58:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:37.305 11:58:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:37.305 11:58:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:37.305 11:58:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:37.305 11:58:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:37.305 { 00:14:37.305 "subsystems": [ 00:14:37.305 { 00:14:37.305 "subsystem": "bdev", 00:14:37.305 "config": [ 00:14:37.305 { 00:14:37.305 "params": { 00:14:37.305 "io_mechanism": "libaio", 00:14:37.305 "conserve_cpu": false, 00:14:37.305 "filename": "/dev/nvme0n1", 00:14:37.305 "name": "xnvme_bdev" 00:14:37.305 }, 00:14:37.305 "method": "bdev_xnvme_create" 00:14:37.305 }, 00:14:37.305 { 00:14:37.305 "method": "bdev_wait_for_examine" 00:14:37.305 } 00:14:37.305 ] 00:14:37.305 } 00:14:37.305 ] 00:14:37.305 } 00:14:37.305 [2024-11-27 11:58:27.333958] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
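Both bdevperf passes are driven by the same generated JSON, differing only in -w randread versus -w randwrite: gen_conf emits a bdev subsystem holding a single bdev_xnvme_create call plus bdev_wait_for_examine, and the harness hands it to bdevperf as /dev/fd/62. The randwrite invocation can be reproduced outside the harness roughly as below; a here-doc on file descriptor 62 stands in for gen_conf, and the paths, flags, and parameters are copied from the log:

#!/usr/bin/env bash
# Sketch: run bdevperf against an xnvme bdev with an inline JSON config.
SPDK_ROOT=${SPDK_ROOT:-/home/vagrant/spdk_repo/spdk}

"$SPDK_ROOT/build/examples/bdevperf" --json /dev/fd/62 \
    -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 62<<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

Swapping -w randwrite for -w randread reproduces the first pass; everything else, including the 64-deep queue and the 4096-byte I/O size, stays identical.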
00:14:37.305 [2024-11-27 11:58:27.334083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70463 ] 00:14:37.563 [2024-11-27 11:58:27.512431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.821 [2024-11-27 11:58:27.620543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.079 Running I/O for 5 seconds... 00:14:39.946 44648.00 IOPS, 174.41 MiB/s [2024-11-27T11:58:31.377Z] 43272.00 IOPS, 169.03 MiB/s [2024-11-27T11:58:32.313Z] 43299.33 IOPS, 169.14 MiB/s [2024-11-27T11:58:33.250Z] 43307.00 IOPS, 169.17 MiB/s [2024-11-27T11:58:33.250Z] 43362.40 IOPS, 169.38 MiB/s 00:14:43.197 Latency(us) 00:14:43.197 [2024-11-27T11:58:33.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.197 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:43.197 xnvme_bdev : 5.00 43337.12 169.29 0.00 0.00 1473.34 254.97 4816.50 00:14:43.197 [2024-11-27T11:58:33.250Z] =================================================================================================================== 00:14:43.197 [2024-11-27T11:58:33.250Z] Total : 43337.12 169.29 0.00 0.00 1473.34 254.97 4816.50 00:14:44.137 00:14:44.137 real 0m13.670s 00:14:44.137 user 0m4.848s 00:14:44.137 sys 0m5.869s 00:14:44.137 11:58:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.137 11:58:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:44.137 ************************************ 00:14:44.137 END TEST xnvme_bdevperf 00:14:44.137 ************************************ 00:14:44.137 11:58:34 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:44.137 11:58:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:44.137 11:58:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.137 11:58:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:44.137 ************************************ 00:14:44.137 START TEST xnvme_fio_plugin 00:14:44.137 ************************************ 00:14:44.137 11:58:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:44.137 11:58:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:44.137 11:58:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:14:44.137 11:58:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:44.137 11:58:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:44.137 11:58:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:44.137 11:58:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:44.137 11:58:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:44.137 
11:58:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:44.137 11:58:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:44.137 11:58:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:44.137 11:58:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:44.137 11:58:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:44.137 11:58:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:44.137 11:58:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:44.137 11:58:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:44.137 11:58:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:44.137 11:58:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:44.137 11:58:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:44.396 { 00:14:44.396 "subsystems": [ 00:14:44.396 { 00:14:44.396 "subsystem": "bdev", 00:14:44.396 "config": [ 00:14:44.396 { 00:14:44.396 "params": { 00:14:44.396 "io_mechanism": "libaio", 00:14:44.396 "conserve_cpu": false, 00:14:44.396 "filename": "/dev/nvme0n1", 00:14:44.396 "name": "xnvme_bdev" 00:14:44.396 }, 00:14:44.396 "method": "bdev_xnvme_create" 00:14:44.396 }, 00:14:44.396 { 00:14:44.396 "method": "bdev_wait_for_examine" 00:14:44.396 } 00:14:44.396 ] 00:14:44.396 } 00:14:44.396 ] 00:14:44.396 } 00:14:44.396 11:58:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:44.396 11:58:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:44.396 11:58:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:44.396 11:58:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:44.396 11:58:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:44.396 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:44.396 fio-3.35 00:14:44.396 Starting 1 thread 00:14:50.963 00:14:50.963 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70588: Wed Nov 27 11:58:40 2024 00:14:50.963 read: IOPS=56.1k, BW=219MiB/s (230MB/s)(1096MiB/5001msec) 00:14:50.963 slat (usec): min=4, max=843, avg=15.65, stdev=27.77 00:14:50.963 clat (usec): min=64, max=5999, avg=688.65, stdev=413.04 00:14:50.963 lat (usec): min=86, max=6091, avg=704.29, stdev=414.25 00:14:50.963 clat percentiles (usec): 00:14:50.963 | 1.00th=[ 145], 5.00th=[ 233], 10.00th=[ 293], 20.00th=[ 388], 00:14:50.963 | 30.00th=[ 465], 40.00th=[ 537], 50.00th=[ 619], 60.00th=[ 701], 00:14:50.964 | 70.00th=[ 791], 80.00th=[ 914], 90.00th=[ 1123], 95.00th=[ 1336], 00:14:50.964 | 99.00th=[ 2311], 99.50th=[ 2900], 99.90th=[ 3884], 99.95th=[ 4228], 00:14:50.964 | 99.99th=[ 4752] 00:14:50.964 bw ( KiB/s): min=179376, max=292688, 
per=99.30%, avg=222827.56, stdev=34989.19, samples=9 00:14:50.964 iops : min=44844, max=73172, avg=55706.89, stdev=8747.30, samples=9 00:14:50.964 lat (usec) : 100=0.16%, 250=6.08%, 500=28.56%, 750=31.16%, 1000=18.91% 00:14:50.964 lat (msec) : 2=13.65%, 4=1.41%, 10=0.08% 00:14:50.964 cpu : usr=25.72%, sys=57.68%, ctx=38, majf=0, minf=764 00:14:50.964 IO depths : 1=0.1%, 2=0.7%, 4=3.1%, 8=9.4%, 16=25.1%, 32=59.6%, >=64=2.0% 00:14:50.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:50.964 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:14:50.964 issued rwts: total=280560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:50.964 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:50.964 00:14:50.964 Run status group 0 (all jobs): 00:14:50.964 READ: bw=219MiB/s (230MB/s), 219MiB/s-219MiB/s (230MB/s-230MB/s), io=1096MiB (1149MB), run=5001-5001msec 00:14:51.532 ----------------------------------------------------- 00:14:51.532 Suppressions used: 00:14:51.532 count bytes template 00:14:51.532 1 11 /usr/src/fio/parse.c 00:14:51.532 1 8 libtcmalloc_minimal.so 00:14:51.532 1 904 libcrypto.so 00:14:51.532 ----------------------------------------------------- 00:14:51.532 00:14:51.532 11:58:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:51.532 11:58:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.532 11:58:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:51.532 11:58:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.532 11:58:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:51.532 11:58:41 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:51.532 11:58:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:51.532 11:58:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:51.532 11:58:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:51.532 11:58:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:51.532 11:58:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:51.532 11:58:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:51.532 11:58:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:51.532 11:58:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:51.532 11:58:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:51.532 11:58:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:51.792 11:58:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:51.792 11:58:41 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:51.792 11:58:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:51.792 11:58:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:51.792 11:58:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.792 { 00:14:51.792 "subsystems": [ 00:14:51.792 { 00:14:51.792 "subsystem": "bdev", 00:14:51.792 "config": [ 00:14:51.792 { 00:14:51.792 "params": { 00:14:51.792 "io_mechanism": "libaio", 00:14:51.792 "conserve_cpu": false, 00:14:51.792 "filename": "/dev/nvme0n1", 00:14:51.792 "name": "xnvme_bdev" 00:14:51.792 }, 00:14:51.792 "method": "bdev_xnvme_create" 00:14:51.792 }, 00:14:51.792 { 00:14:51.792 "method": "bdev_wait_for_examine" 00:14:51.792 } 00:14:51.792 ] 00:14:51.792 } 00:14:51.792 ] 00:14:51.792 } 00:14:51.792 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:51.792 fio-3.35 00:14:51.792 Starting 1 thread 00:14:58.368 00:14:58.368 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70693: Wed Nov 27 11:58:47 2024 00:14:58.368 write: IOPS=43.5k, BW=170MiB/s (178MB/s)(849MiB/5001msec); 0 zone resets 00:14:58.368 slat (usec): min=4, max=1011, avg=20.16, stdev=35.01 00:14:58.368 clat (usec): min=78, max=6206, avg=878.59, stdev=567.14 00:14:58.368 lat (usec): min=140, max=6264, avg=898.76, stdev=571.69 00:14:58.368 clat percentiles (usec): 00:14:58.368 | 1.00th=[ 188], 5.00th=[ 281], 10.00th=[ 351], 20.00th=[ 465], 00:14:58.368 | 30.00th=[ 562], 40.00th=[ 652], 50.00th=[ 750], 60.00th=[ 857], 00:14:58.368 | 70.00th=[ 996], 80.00th=[ 1172], 90.00th=[ 1516], 95.00th=[ 1909], 00:14:58.368 | 99.00th=[ 3130], 99.50th=[ 3654], 99.90th=[ 4686], 99.95th=[ 4948], 00:14:58.368 | 99.99th=[ 5538] 00:14:58.368 bw ( KiB/s): min=126544, max=239424, per=96.99%, avg=168569.00, stdev=41059.41, samples=9 00:14:58.368 iops : min=31636, max=59856, avg=42142.22, stdev=10264.87, samples=9 00:14:58.368 lat (usec) : 100=0.07%, 250=3.25%, 500=20.23%, 750=26.72%, 1000=20.14% 00:14:58.368 lat (msec) : 2=25.21%, 4=4.10%, 10=0.29% 00:14:58.368 cpu : usr=25.48%, sys=56.58%, ctx=54, majf=0, minf=764 00:14:58.368 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=10.4%, 16=25.3%, 32=57.7%, >=64=1.9% 00:14:58.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.368 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:14:58.368 issued rwts: total=0,217305,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:58.369 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:58.369 00:14:58.369 Run status group 0 (all jobs): 00:14:58.369 WRITE: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=849MiB (890MB), run=5001-5001msec 00:14:59.312 ----------------------------------------------------- 00:14:59.312 Suppressions used: 00:14:59.313 count bytes template 00:14:59.313 1 11 /usr/src/fio/parse.c 00:14:59.313 1 8 libtcmalloc_minimal.so 00:14:59.313 1 904 libcrypto.so 00:14:59.313 ----------------------------------------------------- 00:14:59.313 00:14:59.313 00:14:59.313 real 0m14.921s 00:14:59.313 user 0m6.304s 00:14:59.313 sys 0m6.564s 00:14:59.313 
************************************ 00:14:59.313 END TEST xnvme_fio_plugin 00:14:59.313 ************************************ 00:14:59.313 11:58:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.313 11:58:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:59.313 11:58:49 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:59.313 11:58:49 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:59.313 11:58:49 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:59.313 11:58:49 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:59.313 11:58:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:59.313 11:58:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.313 11:58:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.313 ************************************ 00:14:59.313 START TEST xnvme_rpc 00:14:59.313 ************************************ 00:14:59.313 11:58:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:59.313 11:58:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:59.313 11:58:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:59.313 11:58:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:59.313 11:58:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:59.313 11:58:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70780 00:14:59.313 11:58:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70780 00:14:59.313 11:58:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:59.313 11:58:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70780 ']' 00:14:59.313 11:58:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.313 11:58:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.313 11:58:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.313 11:58:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.313 11:58:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.313 [2024-11-27 11:58:49.255958] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
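The EAL banner above is spdk_tgt coming up for the xnvme_rpc test. The rpc_cmd xtrace that follows boils down to a create/inspect/delete round trip; assuming SPDK's scripts/rpc.py helper on the default /var/tmp/spdk.sock socket (rpc_cmd is a thin wrapper around it), a standalone equivalent would look roughly like:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c   # -c sets conserve_cpu=true
  $rpc framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
  $rpc bdev_xnvme_delete xnvme_bdev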
00:14:59.314 [2024-11-27 11:58:49.256079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70780 ] 00:14:59.578 [2024-11-27 11:58:49.437791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.578 [2024-11-27 11:58:49.575793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.953 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.953 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:00.953 11:58:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:15:00.953 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.953 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.953 xnvme_bdev 00:15:00.953 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.953 11:58:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:00.953 11:58:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:00.953 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.953 11:58:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:00.953 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.953 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.953 11:58:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:00.953 11:58:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:00.953 11:58:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:00.953 11:58:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:00.953 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.953 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.953 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70780 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70780 ']' 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70780 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70780 00:15:00.954 killing process with pid 70780 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70780' 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70780 00:15:00.954 11:58:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70780 00:15:03.489 ************************************ 00:15:03.489 END TEST xnvme_rpc 00:15:03.489 ************************************ 00:15:03.489 00:15:03.489 real 0m4.266s 00:15:03.489 user 0m4.160s 00:15:03.489 sys 0m0.691s 00:15:03.489 11:58:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:03.489 11:58:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.489 11:58:53 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:03.489 11:58:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:03.489 11:58:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:03.489 11:58:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:03.489 ************************************ 00:15:03.489 START TEST xnvme_bdevperf 00:15:03.489 ************************************ 00:15:03.489 11:58:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:03.489 11:58:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:03.489 11:58:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:15:03.489 11:58:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:03.489 11:58:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:03.489 11:58:53 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:03.489 11:58:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:03.489 11:58:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:03.748 { 00:15:03.748 "subsystems": [ 00:15:03.748 { 00:15:03.748 "subsystem": "bdev", 00:15:03.748 "config": [ 00:15:03.748 { 00:15:03.748 "params": { 00:15:03.748 "io_mechanism": "libaio", 00:15:03.748 "conserve_cpu": true, 00:15:03.748 "filename": "/dev/nvme0n1", 00:15:03.748 "name": "xnvme_bdev" 00:15:03.748 }, 00:15:03.748 "method": "bdev_xnvme_create" 00:15:03.748 }, 00:15:03.748 { 00:15:03.748 "method": "bdev_wait_for_examine" 00:15:03.748 } 00:15:03.748 ] 00:15:03.748 } 00:15:03.748 ] 00:15:03.748 } 00:15:03.748 [2024-11-27 11:58:53.589172] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:15:03.748 [2024-11-27 11:58:53.589477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70871 ] 00:15:03.748 [2024-11-27 11:58:53.768953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.008 [2024-11-27 11:58:53.894521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.575 Running I/O for 5 seconds... 00:15:06.445 40598.00 IOPS, 158.59 MiB/s [2024-11-27T11:58:57.434Z] 40742.50 IOPS, 159.15 MiB/s [2024-11-27T11:58:58.477Z] 40987.00 IOPS, 160.11 MiB/s [2024-11-27T11:58:59.423Z] 41340.00 IOPS, 161.48 MiB/s 00:15:09.370 Latency(us) 00:15:09.370 [2024-11-27T11:58:59.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.370 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:09.370 xnvme_bdev : 5.00 41598.27 162.49 0.00 0.00 1535.43 330.64 9264.53 00:15:09.370 [2024-11-27T11:58:59.423Z] =================================================================================================================== 00:15:09.370 [2024-11-27T11:58:59.423Z] Total : 41598.27 162.49 0.00 0.00 1535.43 330.64 9264.53 00:15:10.748 11:59:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:10.748 11:59:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:10.748 11:59:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:10.748 11:59:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:10.748 11:59:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:10.748 { 00:15:10.748 "subsystems": [ 00:15:10.748 { 00:15:10.748 "subsystem": "bdev", 00:15:10.748 "config": [ 00:15:10.748 { 00:15:10.748 "params": { 00:15:10.748 "io_mechanism": "libaio", 00:15:10.748 "conserve_cpu": true, 00:15:10.748 "filename": "/dev/nvme0n1", 00:15:10.748 "name": "xnvme_bdev" 00:15:10.748 }, 00:15:10.748 "method": "bdev_xnvme_create" 00:15:10.748 }, 00:15:10.748 { 00:15:10.748 "method": "bdev_wait_for_examine" 00:15:10.748 } 00:15:10.748 ] 00:15:10.748 } 00:15:10.748 ] 00:15:10.748 } 00:15:10.748 [2024-11-27 11:59:00.667213] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
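Here bdevperf takes over from fio for the conserve_cpu=true libaio runs; the JSON above is the same bdev_xnvme_create config, fed over /dev/fd/62. Stripped of the gen_conf plumbing, each five-second pass reduces to something like the line below, with $conf holding the JSON printed above — an approximation of the harness call, not a verbatim copy.

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(printf '%s' "$conf") \
    -q 64 -o 4096 -w randread -t 5 -T xnvme_bdev   # second pass uses -w randwrite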
00:15:10.748 [2024-11-27 11:59:00.667590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70962 ] 00:15:11.007 [2024-11-27 11:59:00.846282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.007 [2024-11-27 11:59:00.978388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.574 Running I/O for 5 seconds... 00:15:13.446 49231.00 IOPS, 192.31 MiB/s [2024-11-27T11:59:04.440Z] 49790.50 IOPS, 194.49 MiB/s [2024-11-27T11:59:05.818Z] 49928.67 IOPS, 195.03 MiB/s [2024-11-27T11:59:06.754Z] 49525.50 IOPS, 193.46 MiB/s [2024-11-27T11:59:06.754Z] 49400.60 IOPS, 192.97 MiB/s 00:15:16.701 Latency(us) 00:15:16.701 [2024-11-27T11:59:06.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.701 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:16.701 xnvme_bdev : 5.00 49373.33 192.86 0.00 0.00 1293.47 180.95 9001.33 00:15:16.701 [2024-11-27T11:59:06.754Z] =================================================================================================================== 00:15:16.702 [2024-11-27T11:59:06.755Z] Total : 49373.33 192.86 0.00 0.00 1293.47 180.95 9001.33 00:15:17.637 00:15:17.637 real 0m14.170s 00:15:17.637 user 0m5.118s 00:15:17.637 sys 0m7.092s 00:15:17.637 11:59:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:17.637 11:59:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:17.637 ************************************ 00:15:17.637 END TEST xnvme_bdevperf 00:15:17.637 ************************************ 00:15:17.895 11:59:07 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:17.895 11:59:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:17.895 11:59:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:17.895 11:59:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.895 ************************************ 00:15:17.895 START TEST xnvme_fio_plugin 00:15:17.895 ************************************ 00:15:17.895 11:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:17.895 11:59:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:17.895 11:59:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:15:17.895 11:59:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:17.895 11:59:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:17.895 11:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:17.895 11:59:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:17.896 11:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:17.896 
11:59:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:17.896 11:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:17.896 11:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:17.896 11:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:17.896 11:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:17.896 11:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:17.896 11:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:17.896 11:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:17.896 11:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:17.896 11:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:17.896 11:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:17.896 11:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:17.896 11:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:17.896 11:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:17.896 11:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:17.896 11:59:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:17.896 { 00:15:17.896 "subsystems": [ 00:15:17.896 { 00:15:17.896 "subsystem": "bdev", 00:15:17.896 "config": [ 00:15:17.896 { 00:15:17.896 "params": { 00:15:17.896 "io_mechanism": "libaio", 00:15:17.896 "conserve_cpu": true, 00:15:17.896 "filename": "/dev/nvme0n1", 00:15:17.896 "name": "xnvme_bdev" 00:15:17.896 }, 00:15:17.896 "method": "bdev_xnvme_create" 00:15:17.896 }, 00:15:17.896 { 00:15:17.896 "method": "bdev_wait_for_examine" 00:15:17.896 } 00:15:17.896 ] 00:15:17.896 } 00:15:17.896 ] 00:15:17.896 } 00:15:18.154 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:18.154 fio-3.35 00:15:18.154 Starting 1 thread 00:15:24.722 00:15:24.723 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71096: Wed Nov 27 11:59:13 2024 00:15:24.723 read: IOPS=41.6k, BW=163MiB/s (171MB/s)(814MiB/5001msec) 00:15:24.723 slat (usec): min=4, max=1020, avg=21.13, stdev=37.06 00:15:24.723 clat (usec): min=57, max=7493, avg=910.16, stdev=601.76 00:15:24.723 lat (usec): min=71, max=7624, avg=931.28, stdev=607.02 00:15:24.723 clat percentiles (usec): 00:15:24.723 | 1.00th=[ 186], 5.00th=[ 285], 10.00th=[ 363], 20.00th=[ 478], 00:15:24.723 | 30.00th=[ 578], 40.00th=[ 668], 50.00th=[ 775], 60.00th=[ 898], 00:15:24.723 | 70.00th=[ 1037], 80.00th=[ 1221], 90.00th=[ 1549], 95.00th=[ 1942], 00:15:24.723 | 99.00th=[ 3392], 99.50th=[ 3982], 99.90th=[ 4948], 99.95th=[ 5342], 00:15:24.723 | 99.99th=[ 6259] 00:15:24.723 bw ( KiB/s): min=125048, max=214880, 
per=95.05%, avg=158336.00, stdev=31198.41, samples=9 00:15:24.723 iops : min=31262, max=53720, avg=39584.00, stdev=7799.60, samples=9 00:15:24.723 lat (usec) : 100=0.06%, 250=3.15%, 500=19.02%, 750=25.79%, 1000=19.58% 00:15:24.723 lat (msec) : 2=27.81%, 4=4.10%, 10=0.48% 00:15:24.723 cpu : usr=24.24%, sys=55.96%, ctx=343, majf=0, minf=764 00:15:24.723 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=10.4%, 16=25.2%, 32=57.8%, >=64=1.9% 00:15:24.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.723 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:15:24.723 issued rwts: total=208264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:24.723 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:24.723 00:15:24.723 Run status group 0 (all jobs): 00:15:24.723 READ: bw=163MiB/s (171MB/s), 163MiB/s-163MiB/s (171MB/s-171MB/s), io=814MiB (853MB), run=5001-5001msec 00:15:25.292 ----------------------------------------------------- 00:15:25.292 Suppressions used: 00:15:25.292 count bytes template 00:15:25.292 1 11 /usr/src/fio/parse.c 00:15:25.292 1 8 libtcmalloc_minimal.so 00:15:25.292 1 904 libcrypto.so 00:15:25.292 ----------------------------------------------------- 00:15:25.292 00:15:25.292 11:59:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:25.292 11:59:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:25.292 11:59:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:25.292 11:59:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:25.292 11:59:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:25.292 11:59:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:25.292 11:59:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:25.292 11:59:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:25.292 11:59:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:25.292 11:59:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:25.292 11:59:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:25.292 11:59:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:25.292 11:59:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:25.292 11:59:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:25.292 11:59:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:25.292 11:59:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:25.292 11:59:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:25.292 11:59:15 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:25.292 11:59:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:25.292 11:59:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:25.292 11:59:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:25.552 { 00:15:25.552 "subsystems": [ 00:15:25.552 { 00:15:25.552 "subsystem": "bdev", 00:15:25.552 "config": [ 00:15:25.552 { 00:15:25.552 "params": { 00:15:25.552 "io_mechanism": "libaio", 00:15:25.552 "conserve_cpu": true, 00:15:25.552 "filename": "/dev/nvme0n1", 00:15:25.552 "name": "xnvme_bdev" 00:15:25.552 }, 00:15:25.552 "method": "bdev_xnvme_create" 00:15:25.552 }, 00:15:25.552 { 00:15:25.552 "method": "bdev_wait_for_examine" 00:15:25.552 } 00:15:25.552 ] 00:15:25.552 } 00:15:25.552 ] 00:15:25.552 } 00:15:25.552 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:25.552 fio-3.35 00:15:25.552 Starting 1 thread 00:15:32.120 00:15:32.120 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71194: Wed Nov 27 11:59:21 2024 00:15:32.120 write: IOPS=72.9k, BW=285MiB/s (299MB/s)(1424MiB/5001msec); 0 zone resets 00:15:32.120 slat (usec): min=4, max=2454, avg=11.52, stdev=32.34 00:15:32.120 clat (usec): min=78, max=4957, avg=578.92, stdev=228.87 00:15:32.120 lat (usec): min=139, max=5064, avg=590.45, stdev=225.79 00:15:32.120 clat percentiles (usec): 00:15:32.120 | 1.00th=[ 163], 5.00th=[ 293], 10.00th=[ 351], 20.00th=[ 424], 00:15:32.120 | 30.00th=[ 474], 40.00th=[ 515], 50.00th=[ 562], 60.00th=[ 603], 00:15:32.120 | 70.00th=[ 652], 80.00th=[ 717], 90.00th=[ 799], 95.00th=[ 881], 00:15:32.120 | 99.00th=[ 1270], 99.50th=[ 1582], 99.90th=[ 2999], 99.95th=[ 3556], 00:15:32.120 | 99.99th=[ 4228] 00:15:32.120 bw ( KiB/s): min=236464, max=307616, per=100.00%, avg=293982.22, stdev=23250.35, samples=9 00:15:32.120 iops : min=59116, max=76904, avg=73495.56, stdev=5812.59, samples=9 00:15:32.120 lat (usec) : 100=0.07%, 250=2.76%, 500=33.45%, 750=48.80%, 1000=12.52% 00:15:32.120 lat (msec) : 2=2.13%, 4=0.26%, 10=0.02% 00:15:32.120 cpu : usr=39.72%, sys=51.20%, ctx=14, majf=0, minf=764 00:15:32.120 IO depths : 1=0.2%, 2=0.5%, 4=1.9%, 8=6.5%, 16=21.7%, 32=66.8%, >=64=2.4% 00:15:32.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.120 complete : 0=0.0%, 4=97.7%, 8=0.1%, 16=0.1%, 32=0.4%, 64=1.6%, >=64=0.0% 00:15:32.120 issued rwts: total=0,364533,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.120 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:32.120 00:15:32.120 Run status group 0 (all jobs): 00:15:32.120 WRITE: bw=285MiB/s (299MB/s), 285MiB/s-285MiB/s (299MB/s-299MB/s), io=1424MiB (1493MB), run=5001-5001msec 00:15:33.058 ----------------------------------------------------- 00:15:33.058 Suppressions used: 00:15:33.058 count bytes template 00:15:33.058 1 11 /usr/src/fio/parse.c 00:15:33.058 1 8 libtcmalloc_minimal.so 00:15:33.058 1 904 libcrypto.so 00:15:33.058 ----------------------------------------------------- 00:15:33.058 00:15:33.058 00:15:33.058 real 0m15.061s 00:15:33.058 user 0m7.032s 00:15:33.058 sys 0m6.239s 00:15:33.058 11:59:22 
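Worth noting from the trace above: before each fio pass, autotest_common.sh probes which sanitizer runtime the plugin links against and preloads it alongside the plugin. Paraphrased from the xtrace — the sanitizers array, the ldd/grep/awk pipeline, and the break come straight from it; the head -n1 is a defensive addition here:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  sanitizers=('libasan' 'libclang_rt.asan')
  asan_lib=
  for sanitizer in "${sanitizers[@]}"; do
    # third ldd column is the resolved library path
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}' | head -n1)
    [[ -n $asan_lib ]] && break
  done
  echo "would preload: $asan_lib $plugin"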
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.058 11:59:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:33.058 ************************************ 00:15:33.058 END TEST xnvme_fio_plugin 00:15:33.058 ************************************ 00:15:33.059 11:59:22 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:33.059 11:59:22 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:15:33.059 11:59:22 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:15:33.059 11:59:22 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:15:33.059 11:59:22 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:33.059 11:59:22 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:33.059 11:59:22 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:33.059 11:59:22 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:33.059 11:59:22 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:33.059 11:59:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:33.059 11:59:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.059 11:59:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:33.059 ************************************ 00:15:33.059 START TEST xnvme_rpc 00:15:33.059 ************************************ 00:15:33.059 11:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:33.059 11:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:33.059 11:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:33.059 11:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:33.059 11:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:33.059 11:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71282 00:15:33.059 11:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:33.059 11:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71282 00:15:33.059 11:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71282 ']' 00:15:33.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.059 11:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.059 11:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.059 11:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.059 11:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.059 11:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.059 [2024-11-27 11:59:23.002719] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:15:33.059 [2024-11-27 11:59:23.003067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71282 ] 00:15:33.318 [2024-11-27 11:59:23.189743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.318 [2024-11-27 11:59:23.327465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.699 xnvme_bdev 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:34.699 11:59:24 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71282 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71282 ']' 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71282 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71282 00:15:34.699 killing process with pid 71282 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71282' 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71282 00:15:34.699 11:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71282 00:15:37.237 00:15:37.237 real 0m4.190s 00:15:37.237 user 0m4.079s 00:15:37.237 sys 0m0.711s 00:15:37.237 11:59:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.237 ************************************ 00:15:37.237 END TEST xnvme_rpc 00:15:37.237 ************************************ 00:15:37.237 11:59:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.237 11:59:27 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:37.237 11:59:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:37.237 11:59:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.237 11:59:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.237 ************************************ 00:15:37.237 START TEST xnvme_bdevperf 00:15:37.237 ************************************ 00:15:37.237 11:59:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:37.237 11:59:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:37.237 11:59:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:15:37.237 11:59:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:37.237 11:59:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:37.237 11:59:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:15:37.237 11:59:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:37.237 11:59:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:37.237 { 00:15:37.237 "subsystems": [ 00:15:37.237 { 00:15:37.237 "subsystem": "bdev", 00:15:37.237 "config": [ 00:15:37.237 { 00:15:37.237 "params": { 00:15:37.237 "io_mechanism": "io_uring", 00:15:37.237 "conserve_cpu": false, 00:15:37.237 "filename": "/dev/nvme0n1", 00:15:37.237 "name": "xnvme_bdev" 00:15:37.237 }, 00:15:37.237 "method": "bdev_xnvme_create" 00:15:37.237 }, 00:15:37.237 { 00:15:37.237 "method": "bdev_wait_for_examine" 00:15:37.237 } 00:15:37.237 ] 00:15:37.237 } 00:15:37.237 ] 00:15:37.237 } 00:15:37.237 [2024-11-27 11:59:27.262319] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:15:37.237 [2024-11-27 11:59:27.262453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71370 ] 00:15:37.497 [2024-11-27 11:59:27.444636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.756 [2024-11-27 11:59:27.578691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.014 Running I/O for 5 seconds... 00:15:40.328 24768.00 IOPS, 96.75 MiB/s [2024-11-27T11:59:31.314Z] 23360.00 IOPS, 91.25 MiB/s [2024-11-27T11:59:32.250Z] 24298.67 IOPS, 94.92 MiB/s [2024-11-27T11:59:33.187Z] 24704.00 IOPS, 96.50 MiB/s 00:15:43.134 Latency(us) 00:15:43.134 [2024-11-27T11:59:33.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.134 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:43.134 xnvme_bdev : 5.01 24096.28 94.13 0.00 0.00 2647.73 1552.86 8106.46 00:15:43.134 [2024-11-27T11:59:33.187Z] =================================================================================================================== 00:15:43.134 [2024-11-27T11:59:33.187Z] Total : 24096.28 94.13 0.00 0.00 2647.73 1552.86 8106.46 00:15:44.579 11:59:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:44.580 11:59:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:44.580 11:59:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:44.580 11:59:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:44.580 11:59:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:44.580 { 00:15:44.580 "subsystems": [ 00:15:44.580 { 00:15:44.580 "subsystem": "bdev", 00:15:44.580 "config": [ 00:15:44.580 { 00:15:44.580 "params": { 00:15:44.580 "io_mechanism": "io_uring", 00:15:44.580 "conserve_cpu": false, 00:15:44.580 "filename": "/dev/nvme0n1", 00:15:44.580 "name": "xnvme_bdev" 00:15:44.580 }, 00:15:44.580 "method": "bdev_xnvme_create" 00:15:44.580 }, 00:15:44.580 { 00:15:44.580 "method": "bdev_wait_for_examine" 00:15:44.580 } 00:15:44.580 ] 00:15:44.580 } 00:15:44.580 ] 00:15:44.580 } 00:15:44.580 [2024-11-27 11:59:34.281829] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:15:44.580 [2024-11-27 11:59:34.281936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71452 ] 00:15:44.580 [2024-11-27 11:59:34.460900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.580 [2024-11-27 11:59:34.590128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.148 Running I/O for 5 seconds... 00:15:47.026 22336.00 IOPS, 87.25 MiB/s [2024-11-27T11:59:38.019Z] 22016.00 IOPS, 86.00 MiB/s [2024-11-27T11:59:39.410Z] 22592.00 IOPS, 88.25 MiB/s [2024-11-27T11:59:40.351Z] 22800.00 IOPS, 89.06 MiB/s 00:15:50.298 Latency(us) 00:15:50.298 [2024-11-27T11:59:40.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.298 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:50.298 xnvme_bdev : 5.00 24111.66 94.19 0.00 0.00 2646.40 1375.20 8474.94 00:15:50.298 [2024-11-27T11:59:40.351Z] =================================================================================================================== 00:15:50.298 [2024-11-27T11:59:40.351Z] Total : 24111.66 94.19 0.00 0.00 2646.40 1375.20 8474.94 00:15:51.235 00:15:51.235 real 0m14.028s 00:15:51.235 user 0m7.035s 00:15:51.235 sys 0m6.725s 00:15:51.235 11:59:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.235 ************************************ 00:15:51.236 END TEST xnvme_bdevperf 00:15:51.236 ************************************ 00:15:51.236 11:59:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:51.236 11:59:41 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:51.236 11:59:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:51.236 11:59:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.236 11:59:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.236 ************************************ 00:15:51.236 START TEST xnvme_fio_plugin 00:15:51.236 ************************************ 00:15:51.236 11:59:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:51.236 11:59:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:51.236 11:59:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:15:51.236 11:59:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:51.236 11:59:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:51.236 11:59:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:51.236 11:59:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:51.236 11:59:41 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:51.236 11:59:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:15:51.236 11:59:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:51.236 11:59:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:51.236 11:59:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:51.236 11:59:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:51.236 11:59:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:51.236 11:59:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:51.236 11:59:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:51.236 11:59:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:51.236 11:59:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:51.236 11:59:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:51.495 11:59:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:51.495 11:59:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:51.495 11:59:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:51.495 11:59:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:51.495 11:59:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:51.495 { 00:15:51.495 "subsystems": [ 00:15:51.495 { 00:15:51.495 "subsystem": "bdev", 00:15:51.495 "config": [ 00:15:51.495 { 00:15:51.495 "params": { 00:15:51.495 "io_mechanism": "io_uring", 00:15:51.495 "conserve_cpu": false, 00:15:51.495 "filename": "/dev/nvme0n1", 00:15:51.495 "name": "xnvme_bdev" 00:15:51.495 }, 00:15:51.495 "method": "bdev_xnvme_create" 00:15:51.496 }, 00:15:51.496 { 00:15:51.496 "method": "bdev_wait_for_examine" 00:15:51.496 } 00:15:51.496 ] 00:15:51.496 } 00:15:51.496 ] 00:15:51.496 } 00:15:51.496 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:51.496 fio-3.35 00:15:51.496 Starting 1 thread 00:15:58.071 00:15:58.071 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71577: Wed Nov 27 11:59:47 2024 00:15:58.071 read: IOPS=26.0k, BW=101MiB/s (106MB/s)(507MiB/5001msec) 00:15:58.071 slat (nsec): min=2475, max=84513, avg=6112.11, stdev=3127.69 00:15:58.071 clat (usec): min=1301, max=6910, avg=2219.65, stdev=360.62 00:15:58.071 lat (usec): min=1304, max=6920, avg=2225.76, stdev=362.26 00:15:58.071 clat percentiles (usec): 00:15:58.071 | 1.00th=[ 1483], 5.00th=[ 1663], 10.00th=[ 1762], 20.00th=[ 1893], 00:15:58.071 | 30.00th=[ 2008], 40.00th=[ 2114], 50.00th=[ 2212], 60.00th=[ 2311], 00:15:58.071 | 70.00th=[ 2442], 80.00th=[ 2540], 90.00th=[ 2671], 95.00th=[ 2769], 00:15:58.071 | 99.00th=[ 2933], 99.50th=[ 2966], 99.90th=[ 3195], 99.95th=[ 3490], 00:15:58.071 | 99.99th=[ 6783] 00:15:58.071 bw ( KiB/s): min=91648, max=114688, per=100.00%, avg=103936.00, 
stdev=8111.61, samples=9 00:15:58.071 iops : min=22912, max=28672, avg=25984.00, stdev=2027.90, samples=9 00:15:58.071 lat (msec) : 2=29.59%, 4=70.36%, 10=0.05% 00:15:58.071 cpu : usr=32.56%, sys=66.20%, ctx=11, majf=0, minf=762 00:15:58.071 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:58.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.071 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:58.071 issued rwts: total=129792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.071 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:58.071 00:15:58.071 Run status group 0 (all jobs): 00:15:58.071 READ: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=507MiB (532MB), run=5001-5001msec 00:15:59.011 ----------------------------------------------------- 00:15:59.011 Suppressions used: 00:15:59.011 count bytes template 00:15:59.011 1 11 /usr/src/fio/parse.c 00:15:59.011 1 8 libtcmalloc_minimal.so 00:15:59.011 1 904 libcrypto.so 00:15:59.011 ----------------------------------------------------- 00:15:59.011 00:15:59.011 11:59:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:59.011 11:59:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:59.011 11:59:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:59.011 11:59:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:59.011 11:59:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:59.011 11:59:48 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:59.011 11:59:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:59.011 11:59:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:59.011 11:59:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:59.011 11:59:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:59.012 11:59:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:59.012 11:59:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:59.012 11:59:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:59.012 11:59:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:59.012 11:59:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:59.012 11:59:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:59.012 { 00:15:59.012 "subsystems": [ 00:15:59.012 { 00:15:59.012 "subsystem": "bdev", 00:15:59.012 "config": [ 00:15:59.012 { 00:15:59.012 "params": { 00:15:59.012 "io_mechanism": "io_uring", 00:15:59.012 "conserve_cpu": false, 00:15:59.012 
"filename": "/dev/nvme0n1", 00:15:59.012 "name": "xnvme_bdev" 00:15:59.012 }, 00:15:59.012 "method": "bdev_xnvme_create" 00:15:59.012 }, 00:15:59.012 { 00:15:59.012 "method": "bdev_wait_for_examine" 00:15:59.012 } 00:15:59.012 ] 00:15:59.012 } 00:15:59.012 ] 00:15:59.012 } 00:15:59.012 11:59:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:59.012 11:59:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:59.012 11:59:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:59.012 11:59:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:59.012 11:59:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:59.012 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:59.012 fio-3.35 00:15:59.012 Starting 1 thread 00:16:05.585 00:16:05.585 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71678: Wed Nov 27 11:59:54 2024 00:16:05.585 write: IOPS=27.0k, BW=106MiB/s (111MB/s)(528MiB/5002msec); 0 zone resets 00:16:05.585 slat (usec): min=2, max=1897, avg= 6.09, stdev= 6.06 00:16:05.585 clat (usec): min=1310, max=5563, avg=2125.61, stdev=386.68 00:16:05.585 lat (usec): min=1314, max=5572, avg=2131.69, stdev=388.44 00:16:05.585 clat percentiles (usec): 00:16:05.585 | 1.00th=[ 1483], 5.00th=[ 1582], 10.00th=[ 1647], 20.00th=[ 1762], 00:16:05.585 | 30.00th=[ 1860], 40.00th=[ 1975], 50.00th=[ 2089], 60.00th=[ 2212], 00:16:05.585 | 70.00th=[ 2343], 80.00th=[ 2474], 90.00th=[ 2671], 95.00th=[ 2802], 00:16:05.585 | 99.00th=[ 2966], 99.50th=[ 2999], 99.90th=[ 3359], 99.95th=[ 4752], 00:16:05.585 | 99.99th=[ 5473] 00:16:05.585 bw ( KiB/s): min=95232, max=117760, per=100.00%, avg=108145.78, stdev=7691.84, samples=9 00:16:05.585 iops : min=23808, max=29440, avg=27036.44, stdev=1922.96, samples=9 00:16:05.585 lat (msec) : 2=42.89%, 4=57.02%, 10=0.09% 00:16:05.585 cpu : usr=32.67%, sys=66.03%, ctx=17, majf=0, minf=762 00:16:05.585 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:05.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.585 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:05.585 issued rwts: total=0,135232,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.585 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:05.585 00:16:05.585 Run status group 0 (all jobs): 00:16:05.585 WRITE: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=528MiB (554MB), run=5002-5002msec 00:16:06.524 ----------------------------------------------------- 00:16:06.524 Suppressions used: 00:16:06.524 count bytes template 00:16:06.524 1 11 /usr/src/fio/parse.c 00:16:06.524 1 8 libtcmalloc_minimal.so 00:16:06.524 1 904 libcrypto.so 00:16:06.524 ----------------------------------------------------- 00:16:06.524 00:16:06.524 00:16:06.524 real 0m15.053s 00:16:06.524 user 0m7.245s 00:16:06.524 sys 0m7.411s 00:16:06.524 11:59:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.524 11:59:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # 
set +x 00:16:06.524 ************************************ 00:16:06.524 END TEST xnvme_fio_plugin 00:16:06.524 ************************************ 00:16:06.524 11:59:56 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:06.524 11:59:56 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:16:06.524 11:59:56 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:16:06.524 11:59:56 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:06.524 11:59:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:06.524 11:59:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.524 11:59:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:06.524 ************************************ 00:16:06.524 START TEST xnvme_rpc 00:16:06.524 ************************************ 00:16:06.524 11:59:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:06.524 11:59:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:06.524 11:59:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:06.524 11:59:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:06.524 11:59:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:06.524 11:59:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71770 00:16:06.524 11:59:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:06.524 11:59:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71770 00:16:06.524 11:59:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71770 ']' 00:16:06.524 11:59:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.524 11:59:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:06.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.524 11:59:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.524 11:59:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:06.524 11:59:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.524 [2024-11-27 11:59:56.520729] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:16:06.524 [2024-11-27 11:59:56.521090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71770 ] 00:16:06.783 [2024-11-27 11:59:56.693216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.783 [2024-11-27 11:59:56.824116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.722 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.722 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:07.722 11:59:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:16:07.722 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.722 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.722 xnvme_bdev 00:16:07.722 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.722 11:59:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:07.722 11:59:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:07.722 11:59:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:07.722 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.722 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.982 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.982 11:59:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:07.982 11:59:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:07.982 11:59:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:07.982 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.982 11:59:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:07.982 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.982 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:07.983 11:59:57 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71770 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71770 ']' 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71770 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:07.983 11:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71770 00:16:07.983 killing process with pid 71770 00:16:07.983 11:59:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:07.983 11:59:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:07.983 11:59:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71770' 00:16:07.983 11:59:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71770 00:16:07.983 11:59:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71770 00:16:10.521 00:16:10.521 real 0m3.907s 00:16:10.521 user 0m3.836s 00:16:10.521 sys 0m0.682s 00:16:10.521 ************************************ 00:16:10.521 END TEST xnvme_rpc 00:16:10.521 ************************************ 00:16:10.521 12:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:10.521 12:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.521 12:00:00 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:10.521 12:00:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:10.521 12:00:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:10.521 12:00:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:10.521 ************************************ 00:16:10.521 START TEST xnvme_bdevperf 00:16:10.521 ************************************ 00:16:10.521 12:00:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:10.521 12:00:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:10.521 12:00:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:16:10.521 12:00:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:10.521 12:00:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:10.521 12:00:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:16:10.521 12:00:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:10.521 12:00:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:10.521 { 00:16:10.521 "subsystems": [ 00:16:10.521 { 00:16:10.521 "subsystem": "bdev", 00:16:10.521 "config": [ 00:16:10.521 { 00:16:10.521 "params": { 00:16:10.521 "io_mechanism": "io_uring", 00:16:10.521 "conserve_cpu": true, 00:16:10.521 "filename": "/dev/nvme0n1", 00:16:10.521 "name": "xnvme_bdev" 00:16:10.521 }, 00:16:10.521 "method": "bdev_xnvme_create" 00:16:10.521 }, 00:16:10.521 { 00:16:10.521 "method": "bdev_wait_for_examine" 00:16:10.521 } 00:16:10.521 ] 00:16:10.521 } 00:16:10.521 ] 00:16:10.521 } 00:16:10.521 [2024-11-27 12:00:00.478862] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:10.521 [2024-11-27 12:00:00.479103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71846 ] 00:16:10.781 [2024-11-27 12:00:00.659711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.781 [2024-11-27 12:00:00.773575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.347 Running I/O for 5 seconds... 00:16:13.217 32064.00 IOPS, 125.25 MiB/s [2024-11-27T12:00:04.202Z] 36638.50 IOPS, 143.12 MiB/s [2024-11-27T12:00:05.165Z] 43717.33 IOPS, 170.77 MiB/s [2024-11-27T12:00:06.537Z] 47375.50 IOPS, 185.06 MiB/s 00:16:16.484 Latency(us) 00:16:16.484 [2024-11-27T12:00:06.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.484 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:16.484 xnvme_bdev : 5.00 50668.36 197.92 0.00 0.00 1259.65 131.60 6527.28 00:16:16.484 [2024-11-27T12:00:06.537Z] =================================================================================================================== 00:16:16.484 [2024-11-27T12:00:06.537Z] Total : 50668.36 197.92 0.00 0.00 1259.65 131.60 6527.28 00:16:17.422 12:00:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:17.422 12:00:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:17.422 12:00:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:17.422 12:00:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:17.422 12:00:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:17.422 { 00:16:17.422 "subsystems": [ 00:16:17.422 { 00:16:17.422 "subsystem": "bdev", 00:16:17.422 "config": [ 00:16:17.422 { 00:16:17.422 "params": { 00:16:17.422 "io_mechanism": "io_uring", 00:16:17.422 "conserve_cpu": true, 00:16:17.422 "filename": "/dev/nvme0n1", 00:16:17.422 "name": "xnvme_bdev" 00:16:17.422 }, 00:16:17.422 "method": "bdev_xnvme_create" 00:16:17.422 }, 00:16:17.422 { 00:16:17.422 "method": "bdev_wait_for_examine" 00:16:17.422 } 00:16:17.422 ] 00:16:17.422 } 00:16:17.422 ] 00:16:17.422 } 00:16:17.422 [2024-11-27 12:00:07.280901] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
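[annotation] The bdevperf run above reads its bdev table as JSON from /dev/fd/62; gen_conf emits the subsystem config printed inline in the log. A standalone equivalent, sketched with a process substitution and the exact flags from this run (-q queue depth, -w workload, -t seconds, -T target bdev, -o IO size in bytes):
    ./build/examples/bdevperf -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 --json <(cat <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
        { "params": { "io_mechanism": "io_uring", "conserve_cpu": true,
                      "filename": "/dev/nvme0n1", "name": "xnvme_bdev" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" }
    ] } ] }
    EOF
    )
The bdev_wait_for_examine entry keeps bdevperf from starting IO before the bdev has finished registering.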
00:16:17.422 [2024-11-27 12:00:07.281029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71930 ] 00:16:17.422 [2024-11-27 12:00:07.463462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.682 [2024-11-27 12:00:07.572799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.942 Running I/O for 5 seconds... 00:16:20.262 23449.00 IOPS, 91.60 MiB/s [2024-11-27T12:00:11.254Z] 22924.50 IOPS, 89.55 MiB/s [2024-11-27T12:00:12.193Z] 22813.67 IOPS, 89.12 MiB/s [2024-11-27T12:00:13.132Z] 22718.00 IOPS, 88.74 MiB/s 00:16:23.079 Latency(us) 00:16:23.079 [2024-11-27T12:00:13.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.079 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:23.079 xnvme_bdev : 5.00 23864.07 93.22 0.00 0.00 2673.27 819.20 8422.30 00:16:23.079 [2024-11-27T12:00:13.132Z] =================================================================================================================== 00:16:23.079 [2024-11-27T12:00:13.132Z] Total : 23864.07 93.22 0.00 0.00 2673.27 819.20 8422.30 00:16:24.022 00:16:24.022 real 0m13.611s 00:16:24.022 user 0m7.463s 00:16:24.022 sys 0m5.597s 00:16:24.022 12:00:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.022 12:00:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:24.022 ************************************ 00:16:24.022 END TEST xnvme_bdevperf 00:16:24.022 ************************************ 00:16:24.022 12:00:14 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:24.022 12:00:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:24.022 12:00:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.022 12:00:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:24.022 ************************************ 00:16:24.022 START TEST xnvme_fio_plugin 00:16:24.022 ************************************ 00:16:24.022 12:00:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:24.022 12:00:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:24.022 12:00:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:16:24.022 12:00:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:24.282 12:00:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:24.282 12:00:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:24.282 12:00:14 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:24.282 12:00:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:24.282 12:00:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:24.282 
12:00:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:24.282 12:00:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:24.283 12:00:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:24.283 12:00:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:24.283 12:00:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:24.283 12:00:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:24.283 12:00:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:24.283 12:00:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:24.283 12:00:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:24.283 12:00:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:24.283 { 00:16:24.283 "subsystems": [ 00:16:24.283 { 00:16:24.283 "subsystem": "bdev", 00:16:24.283 "config": [ 00:16:24.283 { 00:16:24.283 "params": { 00:16:24.283 "io_mechanism": "io_uring", 00:16:24.283 "conserve_cpu": true, 00:16:24.283 "filename": "/dev/nvme0n1", 00:16:24.283 "name": "xnvme_bdev" 00:16:24.283 }, 00:16:24.283 "method": "bdev_xnvme_create" 00:16:24.283 }, 00:16:24.283 { 00:16:24.283 "method": "bdev_wait_for_examine" 00:16:24.283 } 00:16:24.283 ] 00:16:24.283 } 00:16:24.283 ] 00:16:24.283 } 00:16:24.283 12:00:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:24.283 12:00:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:24.283 12:00:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:24.283 12:00:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:24.283 12:00:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:24.283 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:24.283 fio-3.35 00:16:24.283 Starting 1 thread 00:16:30.836 00:16:30.836 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72050: Wed Nov 27 12:00:20 2024 00:16:30.836 read: IOPS=58.4k, BW=228MiB/s (239MB/s)(1140MiB/5001msec) 00:16:30.836 slat (nsec): min=2224, max=38886, avg=2912.18, stdev=987.53 00:16:30.836 clat (usec): min=349, max=3622, avg=979.38, stdev=195.78 00:16:30.836 lat (usec): min=352, max=3626, avg=982.29, stdev=196.00 00:16:30.836 clat percentiles (usec): 00:16:30.836 | 1.00th=[ 717], 5.00th=[ 766], 10.00th=[ 807], 20.00th=[ 848], 00:16:30.836 | 30.00th=[ 881], 40.00th=[ 906], 50.00th=[ 938], 60.00th=[ 971], 00:16:30.836 | 70.00th=[ 1004], 80.00th=[ 1057], 90.00th=[ 1188], 95.00th=[ 1418], 00:16:30.836 | 99.00th=[ 1696], 99.50th=[ 1778], 99.90th=[ 2147], 99.95th=[ 2245], 00:16:30.837 | 99.99th=[ 2638] 00:16:30.837 bw ( KiB/s): min=193024, max=262136, per=100.00%, avg=234360.89, stdev=21888.66, samples=9 
00:16:30.837 iops : min=48256, max=65534, avg=58590.22, stdev=5472.16, samples=9 00:16:30.837 lat (usec) : 500=0.01%, 750=3.01%, 1000=66.20% 00:16:30.837 lat (msec) : 2=30.59%, 4=0.19% 00:16:30.837 cpu : usr=44.30%, sys=52.30%, ctx=12, majf=0, minf=762 00:16:30.837 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:30.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.837 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:30.837 issued rwts: total=291880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.837 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:30.837 00:16:30.837 Run status group 0 (all jobs): 00:16:30.837 READ: bw=228MiB/s (239MB/s), 228MiB/s-228MiB/s (239MB/s-239MB/s), io=1140MiB (1196MB), run=5001-5001msec 00:16:31.405 ----------------------------------------------------- 00:16:31.405 Suppressions used: 00:16:31.405 count bytes template 00:16:31.405 1 11 /usr/src/fio/parse.c 00:16:31.405 1 8 libtcmalloc_minimal.so 00:16:31.405 1 904 libcrypto.so 00:16:31.405 ----------------------------------------------------- 00:16:31.405 00:16:31.405 12:00:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:31.405 12:00:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:31.405 12:00:21 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:31.405 12:00:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:31.405 12:00:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:31.405 12:00:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:31.405 12:00:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:31.405 12:00:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:31.405 12:00:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:31.405 12:00:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:31.405 12:00:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:31.405 12:00:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:31.405 12:00:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:31.405 12:00:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:31.405 12:00:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:31.405 12:00:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:31.664 { 00:16:31.664 "subsystems": [ 00:16:31.664 { 00:16:31.664 "subsystem": "bdev", 00:16:31.664 "config": [ 00:16:31.664 { 00:16:31.664 "params": { 00:16:31.664 "io_mechanism": "io_uring", 00:16:31.664 "conserve_cpu": 
true, 00:16:31.664 "filename": "/dev/nvme0n1", 00:16:31.664 "name": "xnvme_bdev" 00:16:31.664 }, 00:16:31.664 "method": "bdev_xnvme_create" 00:16:31.664 }, 00:16:31.664 { 00:16:31.664 "method": "bdev_wait_for_examine" 00:16:31.664 } 00:16:31.664 ] 00:16:31.664 } 00:16:31.664 ] 00:16:31.664 } 00:16:31.664 12:00:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:31.664 12:00:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:31.664 12:00:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:31.664 12:00:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:31.664 12:00:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:31.664 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:31.664 fio-3.35 00:16:31.664 Starting 1 thread 00:16:38.238 00:16:38.238 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72147: Wed Nov 27 12:00:27 2024 00:16:38.238 write: IOPS=22.6k, BW=88.3MiB/s (92.6MB/s)(442MiB/5001msec); 0 zone resets 00:16:38.238 slat (usec): min=4, max=110, avg= 8.48, stdev= 3.37 00:16:38.238 clat (usec): min=783, max=4880, avg=2489.12, stdev=230.61 00:16:38.238 lat (usec): min=791, max=4907, avg=2497.59, stdev=231.20 00:16:38.238 clat percentiles (usec): 00:16:38.238 | 1.00th=[ 1958], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2278], 00:16:38.238 | 30.00th=[ 2376], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2540], 00:16:38.238 | 70.00th=[ 2638], 80.00th=[ 2704], 90.00th=[ 2769], 95.00th=[ 2835], 00:16:38.238 | 99.00th=[ 2933], 99.50th=[ 2966], 99.90th=[ 3752], 99.95th=[ 4359], 00:16:38.238 | 99.99th=[ 4752] 00:16:38.238 bw ( KiB/s): min=87377, max=94720, per=99.89%, avg=90320.11, stdev=2839.45, samples=9 00:16:38.238 iops : min=21844, max=23680, avg=22580.00, stdev=709.90, samples=9 00:16:38.238 lat (usec) : 1000=0.02% 00:16:38.238 lat (msec) : 2=1.65%, 4=98.26%, 10=0.08% 00:16:38.238 cpu : usr=39.70%, sys=55.42%, ctx=20, majf=0, minf=762 00:16:38.238 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:38.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.238 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:16:38.238 issued rwts: total=0,113046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.238 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:38.238 00:16:38.238 Run status group 0 (all jobs): 00:16:38.238 WRITE: bw=88.3MiB/s (92.6MB/s), 88.3MiB/s-88.3MiB/s (92.6MB/s-92.6MB/s), io=442MiB (463MB), run=5001-5001msec 00:16:38.807 ----------------------------------------------------- 00:16:38.808 Suppressions used: 00:16:38.808 count bytes template 00:16:38.808 1 11 /usr/src/fio/parse.c 00:16:38.808 1 8 libtcmalloc_minimal.so 00:16:38.808 1 904 libcrypto.so 00:16:38.808 ----------------------------------------------------- 00:16:38.808 00:16:38.808 ************************************ 00:16:38.808 END TEST xnvme_fio_plugin 00:16:38.808 ************************************ 00:16:38.808 00:16:38.808 real 0m14.644s 00:16:38.808 user 0m7.861s 00:16:38.808 sys 
0m6.069s 00:16:38.808 12:00:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.808 12:00:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:38.808 12:00:28 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:16:38.808 12:00:28 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:16:38.808 12:00:28 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:16:38.808 12:00:28 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:16:38.808 12:00:28 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:16:38.808 12:00:28 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:38.808 12:00:28 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:16:38.808 12:00:28 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:16:38.808 12:00:28 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:38.808 12:00:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:38.808 12:00:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.808 12:00:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:38.808 ************************************ 00:16:38.808 START TEST xnvme_rpc 00:16:38.808 ************************************ 00:16:38.808 12:00:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:38.808 12:00:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:38.808 12:00:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:38.808 12:00:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:38.808 12:00:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:38.808 12:00:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72233 00:16:38.808 12:00:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.808 12:00:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72233 00:16:38.808 12:00:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72233 ']' 00:16:38.808 12:00:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.808 12:00:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:38.808 12:00:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.808 12:00:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:38.808 12:00:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.067 [2024-11-27 12:00:28.905693] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
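[annotation] From this point the matrix switches io_mechanism from io_uring to io_uring_cmd, which submits NVMe passthrough commands through io_uring against the generic character device instead of the block device used above. Only the create arguments change; a sketch:
    # io_uring      -> block device         (/dev/nvme0n1)
    # io_uring_cmd  -> generic char device  (/dev/ng0n1)
    ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd      # no -c: conserve_cpu stays false
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # expect: io_uring_cmd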
00:16:39.067 [2024-11-27 12:00:28.906004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72233 ] 00:16:39.067 [2024-11-27 12:00:29.087056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.325 [2024-11-27 12:00:29.195220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.263 xnvme_bdev 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72233 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72233 ']' 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72233 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72233 00:16:40.263 killing process with pid 72233 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72233' 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72233 00:16:40.263 12:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72233 00:16:42.799 ************************************ 00:16:42.799 END TEST xnvme_rpc 00:16:42.799 ************************************ 00:16:42.799 00:16:42.799 real 0m3.786s 00:16:42.799 user 0m3.882s 00:16:42.799 sys 0m0.518s 00:16:42.799 12:00:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:42.799 12:00:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.799 12:00:32 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:42.799 12:00:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:42.799 12:00:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:42.799 12:00:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:42.799 ************************************ 00:16:42.799 START TEST xnvme_bdevperf 00:16:42.799 ************************************ 00:16:42.799 12:00:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:42.799 12:00:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:42.799 12:00:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:16:42.799 12:00:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:42.799 12:00:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:42.799 12:00:32 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:42.799 12:00:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:42.799 12:00:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:42.799 { 00:16:42.799 "subsystems": [ 00:16:42.799 { 00:16:42.799 "subsystem": "bdev", 00:16:42.799 "config": [ 00:16:42.799 { 00:16:42.799 "params": { 00:16:42.799 "io_mechanism": "io_uring_cmd", 00:16:42.799 "conserve_cpu": false, 00:16:42.799 "filename": "/dev/ng0n1", 00:16:42.799 "name": "xnvme_bdev" 00:16:42.799 }, 00:16:42.799 "method": "bdev_xnvme_create" 00:16:42.799 }, 00:16:42.799 { 00:16:42.799 "method": "bdev_wait_for_examine" 00:16:42.799 } 00:16:42.799 ] 00:16:42.799 } 00:16:42.799 ] 00:16:42.799 } 00:16:42.799 [2024-11-27 12:00:32.752264] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:42.799 [2024-11-27 12:00:32.752539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72313 ] 00:16:43.057 [2024-11-27 12:00:32.933805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.057 [2024-11-27 12:00:33.046377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.622 Running I/O for 5 seconds... 00:16:45.509 70388.00 IOPS, 274.95 MiB/s [2024-11-27T12:00:36.495Z] 67704.50 IOPS, 264.47 MiB/s [2024-11-27T12:00:37.428Z] 61307.00 IOPS, 239.48 MiB/s [2024-11-27T12:00:38.800Z] 55724.25 IOPS, 217.67 MiB/s 00:16:48.747 Latency(us) 00:16:48.747 [2024-11-27T12:00:38.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.747 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:48.747 xnvme_bdev : 5.00 50043.73 195.48 0.00 0.00 1274.09 227.01 6448.32 00:16:48.747 [2024-11-27T12:00:38.800Z] =================================================================================================================== 00:16:48.747 [2024-11-27T12:00:38.800Z] Total : 50043.73 195.48 0.00 0.00 1274.09 227.01 6448.32 00:16:49.684 12:00:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:49.684 12:00:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:49.684 12:00:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:49.684 12:00:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:49.684 12:00:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:49.684 { 00:16:49.684 "subsystems": [ 00:16:49.684 { 00:16:49.684 "subsystem": "bdev", 00:16:49.684 "config": [ 00:16:49.684 { 00:16:49.684 "params": { 00:16:49.684 "io_mechanism": "io_uring_cmd", 00:16:49.684 "conserve_cpu": false, 00:16:49.684 "filename": "/dev/ng0n1", 00:16:49.684 "name": "xnvme_bdev" 00:16:49.684 }, 00:16:49.684 "method": "bdev_xnvme_create" 00:16:49.684 }, 00:16:49.684 { 00:16:49.684 "method": "bdev_wait_for_examine" 00:16:49.684 } 00:16:49.684 ] 00:16:49.684 } 00:16:49.684 ] 00:16:49.684 } 00:16:49.684 [2024-11-27 12:00:39.558533] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
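[annotation] A quick sanity check on the randread table above: throughput is IOPS times the 4096-byte IO size, which reproduces the MiB/s column exactly:
    awk 'BEGIN { printf "%.2f MiB/s\n", 50043.73 * 4096 / 1048576 }'    # -> 195.48 MiB/s, matching the table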
00:16:49.684 [2024-11-27 12:00:39.558638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72387 ] 00:16:50.078 [2024-11-27 12:00:39.736779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.078 [2024-11-27 12:00:39.839040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.384 Running I/O for 5 seconds... 00:16:52.279 24192.00 IOPS, 94.50 MiB/s [2024-11-27T12:00:43.270Z] 23616.00 IOPS, 92.25 MiB/s [2024-11-27T12:00:44.210Z] 23722.67 IOPS, 92.67 MiB/s [2024-11-27T12:00:45.592Z] 25176.00 IOPS, 98.34 MiB/s [2024-11-27T12:00:45.592Z] 25094.40 IOPS, 98.03 MiB/s 00:16:55.539 Latency(us) 00:16:55.539 [2024-11-27T12:00:45.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.539 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:55.539 xnvme_bdev : 5.01 25057.34 97.88 0.00 0.00 2545.72 861.97 7632.71 00:16:55.539 [2024-11-27T12:00:45.592Z] =================================================================================================================== 00:16:55.539 [2024-11-27T12:00:45.592Z] Total : 25057.34 97.88 0.00 0.00 2545.72 861.97 7632.71 00:16:56.479 12:00:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:56.479 12:00:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:16:56.479 12:00:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:56.479 12:00:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:56.479 12:00:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:56.479 { 00:16:56.479 "subsystems": [ 00:16:56.479 { 00:16:56.479 "subsystem": "bdev", 00:16:56.479 "config": [ 00:16:56.479 { 00:16:56.479 "params": { 00:16:56.479 "io_mechanism": "io_uring_cmd", 00:16:56.479 "conserve_cpu": false, 00:16:56.479 "filename": "/dev/ng0n1", 00:16:56.479 "name": "xnvme_bdev" 00:16:56.479 }, 00:16:56.479 "method": "bdev_xnvme_create" 00:16:56.479 }, 00:16:56.479 { 00:16:56.479 "method": "bdev_wait_for_examine" 00:16:56.479 } 00:16:56.479 ] 00:16:56.479 } 00:16:56.479 ] 00:16:56.479 } 00:16:56.479 [2024-11-27 12:00:46.376250] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:16:56.479 [2024-11-27 12:00:46.376377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72467 ] 00:16:56.738 [2024-11-27 12:00:46.557452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.738 [2024-11-27 12:00:46.664167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.997 Running I/O for 5 seconds... 
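[annotation] The unmap pass starting above (and the write_zeroes pass after it; results follow below) posts far higher IOPS than the 4 KiB randwrite numbers because neither workload carries a data payload. The harness issues one bdevperf invocation per pattern; a condensed sketch, with the pattern list written out for illustration and gen_conf as traced in this log:
    for io_pattern in randread randwrite unmap write_zeroes; do
        ./build/examples/bdevperf --json <(gen_conf) -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
    done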
00:16:58.948 72768.00 IOPS, 284.25 MiB/s [2024-11-27T12:00:50.394Z] 72768.00 IOPS, 284.25 MiB/s [2024-11-27T12:00:51.330Z] 72000.00 IOPS, 281.25 MiB/s [2024-11-27T12:00:52.268Z] 70720.00 IOPS, 276.25 MiB/s 00:17:02.215 Latency(us) 00:17:02.215 [2024-11-27T12:00:52.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.215 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:17:02.215 xnvme_bdev : 5.00 71125.35 277.83 0.00 0.00 897.09 598.77 5711.37 00:17:02.215 [2024-11-27T12:00:52.268Z] =================================================================================================================== 00:17:02.215 [2024-11-27T12:00:52.268Z] Total : 71125.35 277.83 0.00 0.00 897.09 598.77 5711.37 00:17:03.153 12:00:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:03.153 12:00:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:17:03.153 12:00:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:03.153 12:00:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:03.153 12:00:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:03.153 { 00:17:03.153 "subsystems": [ 00:17:03.153 { 00:17:03.153 "subsystem": "bdev", 00:17:03.153 "config": [ 00:17:03.153 { 00:17:03.153 "params": { 00:17:03.153 "io_mechanism": "io_uring_cmd", 00:17:03.153 "conserve_cpu": false, 00:17:03.153 "filename": "/dev/ng0n1", 00:17:03.153 "name": "xnvme_bdev" 00:17:03.153 }, 00:17:03.153 "method": "bdev_xnvme_create" 00:17:03.153 }, 00:17:03.153 { 00:17:03.153 "method": "bdev_wait_for_examine" 00:17:03.153 } 00:17:03.153 ] 00:17:03.153 } 00:17:03.153 ] 00:17:03.153 } 00:17:03.153 [2024-11-27 12:00:53.157707] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:03.153 [2024-11-27 12:00:53.157833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72550 ] 00:17:03.412 [2024-11-27 12:00:53.338458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.412 [2024-11-27 12:00:53.443487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.979 Running I/O for 5 seconds... 
00:17:05.843 56633.00 IOPS, 221.22 MiB/s [2024-11-27T12:00:56.829Z] 63379.50 IOPS, 247.58 MiB/s [2024-11-27T12:00:57.764Z] 62838.33 IOPS, 245.46 MiB/s [2024-11-27T12:00:59.138Z] 60221.25 IOPS, 235.24 MiB/s [2024-11-27T12:00:59.138Z] 59050.60 IOPS, 230.67 MiB/s 00:17:09.085 Latency(us) 00:17:09.085 [2024-11-27T12:00:59.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.085 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:17:09.085 xnvme_bdev : 5.00 59033.12 230.60 0.00 0.00 1081.06 73.61 16318.20 00:17:09.085 [2024-11-27T12:00:59.138Z] =================================================================================================================== 00:17:09.085 [2024-11-27T12:00:59.138Z] Total : 59033.12 230.60 0.00 0.00 1081.06 73.61 16318.20 00:17:10.022 00:17:10.022 real 0m27.287s 00:17:10.022 user 0m14.129s 00:17:10.022 sys 0m12.751s 00:17:10.022 ************************************ 00:17:10.022 END TEST xnvme_bdevperf 00:17:10.022 ************************************ 00:17:10.022 12:00:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.022 12:00:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:10.022 12:01:00 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:10.022 12:01:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:10.022 12:01:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.022 12:01:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:10.022 ************************************ 00:17:10.022 START TEST xnvme_fio_plugin 00:17:10.022 ************************************ 00:17:10.022 12:01:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:10.022 12:01:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:10.022 12:01:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:17:10.022 12:01:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:10.022 12:01:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:10.022 12:01:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:10.022 12:01:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:10.022 12:01:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:10.022 12:01:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:10.022 12:01:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:10.022 12:01:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:10.022 12:01:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:10.022 12:01:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
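[annotation] The xnvme_fio_plugin test setting up above runs stock fio against the SPDK bdev layer through the external spdk_bdev ioengine. Because this build is ASan-instrumented, the trace locates the sanitizer runtime with ldd and preloads it ahead of the plugin so the engine loads cleanly. Condensed from the trace, with paths as in this run:
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')    # -> /usr/lib64/libasan.so.8 here
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
        --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
        --time_based --runtime=5 --thread=1 --name xnvme_bdev
    # when running by hand, point --spdk_json_conf at a JSON file instead of /dev/fd/62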
00:17:10.022 12:01:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:10.022 12:01:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:10.022 12:01:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:10.022 12:01:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:10.022 12:01:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:10.022 12:01:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:10.282 12:01:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:10.282 12:01:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:10.282 12:01:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:10.282 12:01:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:10.282 12:01:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:10.282 { 00:17:10.282 "subsystems": [ 00:17:10.282 { 00:17:10.282 "subsystem": "bdev", 00:17:10.282 "config": [ 00:17:10.282 { 00:17:10.282 "params": { 00:17:10.282 "io_mechanism": "io_uring_cmd", 00:17:10.282 "conserve_cpu": false, 00:17:10.282 "filename": "/dev/ng0n1", 00:17:10.282 "name": "xnvme_bdev" 00:17:10.282 }, 00:17:10.282 "method": "bdev_xnvme_create" 00:17:10.282 }, 00:17:10.282 { 00:17:10.282 "method": "bdev_wait_for_examine" 00:17:10.282 } 00:17:10.282 ] 00:17:10.282 } 00:17:10.282 ] 00:17:10.282 } 00:17:10.282 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:10.282 fio-3.35 00:17:10.282 Starting 1 thread 00:17:16.858 00:17:16.858 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72669: Wed Nov 27 12:01:06 2024 00:17:16.858 read: IOPS=23.1k, BW=90.4MiB/s (94.8MB/s)(452MiB/5003msec) 00:17:16.858 slat (usec): min=2, max=287, avg= 8.13, stdev= 4.23 00:17:16.858 clat (usec): min=1058, max=4979, avg=2434.10, stdev=329.74 00:17:16.858 lat (usec): min=1061, max=4989, avg=2442.23, stdev=331.03 00:17:16.858 clat percentiles (usec): 00:17:16.858 | 1.00th=[ 1303], 5.00th=[ 1663], 10.00th=[ 2147], 20.00th=[ 2278], 00:17:16.858 | 30.00th=[ 2343], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2540], 00:17:16.858 | 70.00th=[ 2606], 80.00th=[ 2671], 90.00th=[ 2769], 95.00th=[ 2802], 00:17:16.858 | 99.00th=[ 2966], 99.50th=[ 3130], 99.90th=[ 4113], 99.95th=[ 4359], 00:17:16.858 | 99.99th=[ 4817] 00:17:16.858 bw ( KiB/s): min=88320, max=105472, per=100.00%, avg=92964.89, stdev=6422.86, samples=9 00:17:16.858 iops : min=22080, max=26368, avg=23241.22, stdev=1605.71, samples=9 00:17:16.858 lat (msec) : 2=7.30%, 4=92.58%, 10=0.12% 00:17:16.858 cpu : usr=40.32%, sys=57.66%, ctx=9, majf=0, minf=762 00:17:16.858 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=24.9%, 32=50.1%, >=64=1.6% 00:17:16.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.858 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:17:16.858 issued rwts: 
total=115808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.858 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:16.858 00:17:16.858 Run status group 0 (all jobs): 00:17:16.858 READ: bw=90.4MiB/s (94.8MB/s), 90.4MiB/s-90.4MiB/s (94.8MB/s-94.8MB/s), io=452MiB (474MB), run=5003-5003msec 00:17:17.428 ----------------------------------------------------- 00:17:17.428 Suppressions used: 00:17:17.428 count bytes template 00:17:17.428 1 11 /usr/src/fio/parse.c 00:17:17.428 1 8 libtcmalloc_minimal.so 00:17:17.428 1 904 libcrypto.so 00:17:17.428 ----------------------------------------------------- 00:17:17.428 00:17:17.428 12:01:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:17.428 12:01:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:17.428 12:01:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:17.428 12:01:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:17.428 12:01:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:17.428 12:01:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:17.428 12:01:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:17.428 12:01:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:17.428 12:01:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:17.428 12:01:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:17.428 12:01:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:17.428 12:01:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:17.428 12:01:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:17.428 12:01:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:17.429 12:01:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:17.429 12:01:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:17.688 { 00:17:17.688 "subsystems": [ 00:17:17.688 { 00:17:17.689 "subsystem": "bdev", 00:17:17.689 "config": [ 00:17:17.689 { 00:17:17.689 "params": { 00:17:17.689 "io_mechanism": "io_uring_cmd", 00:17:17.689 "conserve_cpu": false, 00:17:17.689 "filename": "/dev/ng0n1", 00:17:17.689 "name": "xnvme_bdev" 00:17:17.689 }, 00:17:17.689 "method": "bdev_xnvme_create" 00:17:17.689 }, 00:17:17.689 { 00:17:17.689 "method": "bdev_wait_for_examine" 00:17:17.689 } 00:17:17.689 ] 00:17:17.689 } 00:17:17.689 ] 00:17:17.689 } 00:17:17.689 12:01:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:17.689 12:01:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 
]] 00:17:17.689 12:01:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:17.689 12:01:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:17.689 12:01:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:17.689 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:17.689 fio-3.35 00:17:17.689 Starting 1 thread 00:17:24.265 00:17:24.265 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72768: Wed Nov 27 12:01:13 2024 00:17:24.265 write: IOPS=22.6k, BW=88.4MiB/s (92.7MB/s)(442MiB/5001msec); 0 zone resets 00:17:24.265 slat (usec): min=2, max=481, avg= 8.79, stdev= 5.06 00:17:24.265 clat (usec): min=1144, max=5840, avg=2474.16, stdev=282.40 00:17:24.265 lat (usec): min=1148, max=5869, avg=2482.95, stdev=283.38 00:17:24.265 clat percentiles (usec): 00:17:24.265 | 1.00th=[ 1434], 5.00th=[ 2089], 10.00th=[ 2212], 20.00th=[ 2278], 00:17:24.265 | 30.00th=[ 2343], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2540], 00:17:24.265 | 70.00th=[ 2638], 80.00th=[ 2704], 90.00th=[ 2769], 95.00th=[ 2835], 00:17:24.265 | 99.00th=[ 2966], 99.50th=[ 3097], 99.90th=[ 4047], 99.95th=[ 5276], 00:17:24.265 | 99.99th=[ 5735] 00:17:24.265 bw ( KiB/s): min=87928, max=103728, per=100.00%, avg=90870.22, stdev=5122.19, samples=9 00:17:24.265 iops : min=21982, max=25932, avg=22717.56, stdev=1280.55, samples=9 00:17:24.265 lat (msec) : 2=3.73%, 4=96.17%, 10=0.11% 00:17:24.265 cpu : usr=41.54%, sys=56.32%, ctx=33, majf=0, minf=762 00:17:24.265 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:24.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.265 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:24.265 issued rwts: total=0,113199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:24.265 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:24.265 00:17:24.265 Run status group 0 (all jobs): 00:17:24.265 WRITE: bw=88.4MiB/s (92.7MB/s), 88.4MiB/s-88.4MiB/s (92.7MB/s-92.7MB/s), io=442MiB (464MB), run=5001-5001msec 00:17:24.834 ----------------------------------------------------- 00:17:24.834 Suppressions used: 00:17:24.834 count bytes template 00:17:24.834 1 11 /usr/src/fio/parse.c 00:17:24.834 1 8 libtcmalloc_minimal.so 00:17:24.834 1 904 libcrypto.so 00:17:24.834 ----------------------------------------------------- 00:17:24.834 00:17:24.834 00:17:24.834 real 0m14.703s 00:17:24.834 user 0m7.914s 00:17:24.834 sys 0m6.330s 00:17:24.834 12:01:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:24.834 ************************************ 00:17:24.834 END TEST xnvme_fio_plugin 00:17:24.834 ************************************ 00:17:24.834 12:01:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:24.834 12:01:14 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:24.834 12:01:14 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:17:24.834 12:01:14 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:17:24.834 12:01:14 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc 
xnvme_rpc 00:17:24.834 12:01:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:24.834 12:01:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:24.834 12:01:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:24.834 ************************************ 00:17:24.834 START TEST xnvme_rpc 00:17:24.834 ************************************ 00:17:24.834 12:01:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:24.834 12:01:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:24.834 12:01:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:24.834 12:01:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:24.834 12:01:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:24.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.834 12:01:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72858 00:17:24.834 12:01:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:24.834 12:01:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72858 00:17:24.834 12:01:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72858 ']' 00:17:24.834 12:01:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.834 12:01:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:24.834 12:01:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.834 12:01:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:24.834 12:01:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.093 [2024-11-27 12:01:14.911661] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
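The xnvme_rpc test starting here is a short RPC round-trip: bring up spdk_tgt, create an xnvme bdev over the NVMe char device, read the stored config back, and delete the bdev. A hedged sketch of the same flow driven by hand (the rpc.py path and the /dev/ng0n1 device are taken from this trace, not general defaults; the harness additionally waits for the RPC socket via waitforlisten before issuing commands):

    # Start the target in the background, then talk to it over /var/tmp/spdk.sock.
    ./build/bin/spdk_tgt &

    # Create the bdev; -c maps to conserve_cpu=true in the stored config.
    ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c

    # Read a single param back out of the saved bdev config (expect: true).
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'

    # Tear the bdev down again.
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev

The per-field checks traced below (name, filename, io_mechanism, conserve_cpu) are all instances of that framework_get_config | jq pipeline, wrapped in the rpc_xnvme helper.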
00:17:25.093 [2024-11-27 12:01:14.912008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72858 ] 00:17:25.093 [2024-11-27 12:01:15.091528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.352 [2024-11-27 12:01:15.193485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.290 xnvme_bdev 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:26.290 
12:01:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.290 12:01:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72858 00:17:26.291 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72858 ']' 00:17:26.291 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72858 00:17:26.291 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:26.291 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.291 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72858 00:17:26.291 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:26.291 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:26.291 killing process with pid 72858 00:17:26.291 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72858' 00:17:26.291 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72858 00:17:26.291 12:01:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72858 00:17:28.829 00:17:28.829 real 0m3.747s 00:17:28.829 user 0m3.793s 00:17:28.829 sys 0m0.537s 00:17:28.829 12:01:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.829 12:01:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.829 ************************************ 00:17:28.829 END TEST xnvme_rpc 00:17:28.829 ************************************ 00:17:28.829 12:01:18 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:28.829 12:01:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:28.829 12:01:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:28.829 12:01:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:28.829 ************************************ 00:17:28.829 START TEST xnvme_bdevperf 00:17:28.829 ************************************ 00:17:28.829 12:01:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:28.829 12:01:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:28.829 12:01:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:17:28.829 12:01:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:28.829 12:01:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:28.829 12:01:18 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:28.829 12:01:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:28.829 12:01:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:28.829 { 00:17:28.829 "subsystems": [ 00:17:28.829 { 00:17:28.829 "subsystem": "bdev", 00:17:28.829 "config": [ 00:17:28.829 { 00:17:28.829 "params": { 00:17:28.829 "io_mechanism": "io_uring_cmd", 00:17:28.829 "conserve_cpu": true, 00:17:28.830 "filename": "/dev/ng0n1", 00:17:28.830 "name": "xnvme_bdev" 00:17:28.830 }, 00:17:28.830 "method": "bdev_xnvme_create" 00:17:28.830 }, 00:17:28.830 { 00:17:28.830 "method": "bdev_wait_for_examine" 00:17:28.830 } 00:17:28.830 ] 00:17:28.830 } 00:17:28.830 ] 00:17:28.830 } 00:17:28.830 [2024-11-27 12:01:18.729801] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:28.830 [2024-11-27 12:01:18.729932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72933 ] 00:17:29.089 [2024-11-27 12:01:18.913835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.089 [2024-11-27 12:01:19.024510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.346 Running I/O for 5 seconds... 00:17:31.655 49536.00 IOPS, 193.50 MiB/s [2024-11-27T12:01:22.644Z] 49758.00 IOPS, 194.37 MiB/s [2024-11-27T12:01:23.578Z] 52862.67 IOPS, 206.49 MiB/s [2024-11-27T12:01:24.531Z] 52143.00 IOPS, 203.68 MiB/s [2024-11-27T12:01:24.531Z] 46783.20 IOPS, 182.75 MiB/s 00:17:34.478 Latency(us) 00:17:34.478 [2024-11-27T12:01:24.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.478 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:34.478 xnvme_bdev : 5.01 46700.30 182.42 0.00 0.00 1366.03 697.47 7948.54 00:17:34.478 [2024-11-27T12:01:24.531Z] =================================================================================================================== 00:17:34.478 [2024-11-27T12:01:24.531Z] Total : 46700.30 182.42 0.00 0.00 1366.03 697.47 7948.54 00:17:35.895 12:01:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:35.895 12:01:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:35.895 12:01:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:35.895 12:01:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:35.895 12:01:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:35.895 { 00:17:35.895 "subsystems": [ 00:17:35.895 { 00:17:35.895 "subsystem": "bdev", 00:17:35.895 "config": [ 00:17:35.895 { 00:17:35.895 "params": { 00:17:35.895 "io_mechanism": "io_uring_cmd", 00:17:35.895 "conserve_cpu": true, 00:17:35.895 "filename": "/dev/ng0n1", 00:17:35.895 "name": "xnvme_bdev" 00:17:35.895 }, 00:17:35.895 "method": "bdev_xnvme_create" 00:17:35.895 }, 00:17:35.895 { 00:17:35.895 "method": "bdev_wait_for_examine" 00:17:35.895 } 00:17:35.895 ] 00:17:35.895 } 00:17:35.895 ] 00:17:35.895 } 00:17:35.895 [2024-11-27 12:01:25.655805] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
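Every bdevperf run in this block shares the one-bdev JSON config printed above; only the workload flag (-w randread/randwrite/unmap/write_zeroes) changes between runs. A minimal standalone reproduction, writing the config to a file instead of the /dev/fd/62 substitution that gen_conf uses (device path and all parameters copied from this trace):

    cat > xnvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_xnvme_create",
              "params": {
                "io_mechanism": "io_uring_cmd",
                "conserve_cpu": true,
                "filename": "/dev/ng0n1",
                "name": "xnvme_bdev"
              }
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF

    # 4 KiB random writes at queue depth 64 for 5 seconds, as in the run below:
    ./build/examples/bdevperf --json xnvme.json -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096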
00:17:35.895 [2024-11-27 12:01:25.655930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73013 ] 00:17:35.895 [2024-11-27 12:01:25.833345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.154 [2024-11-27 12:01:25.963425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.414 Running I/O for 5 seconds... 00:17:38.736 24319.00 IOPS, 95.00 MiB/s [2024-11-27T12:01:29.726Z] 23103.50 IOPS, 90.25 MiB/s [2024-11-27T12:01:30.665Z] 22741.00 IOPS, 88.83 MiB/s [2024-11-27T12:01:31.602Z] 22575.75 IOPS, 88.19 MiB/s [2024-11-27T12:01:31.602Z] 22643.00 IOPS, 88.45 MiB/s 00:17:41.550 Latency(us) 00:17:41.550 [2024-11-27T12:01:31.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.550 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:41.550 xnvme_bdev : 5.01 22606.37 88.31 0.00 0.00 2821.64 842.23 8317.02 00:17:41.550 [2024-11-27T12:01:31.603Z] =================================================================================================================== 00:17:41.550 [2024-11-27T12:01:31.603Z] Total : 22606.37 88.31 0.00 0.00 2821.64 842.23 8317.02 00:17:42.486 12:01:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:42.486 12:01:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:17:42.486 12:01:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:42.486 12:01:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:42.486 12:01:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:42.486 { 00:17:42.486 "subsystems": [ 00:17:42.486 { 00:17:42.486 "subsystem": "bdev", 00:17:42.486 "config": [ 00:17:42.486 { 00:17:42.486 "params": { 00:17:42.486 "io_mechanism": "io_uring_cmd", 00:17:42.486 "conserve_cpu": true, 00:17:42.486 "filename": "/dev/ng0n1", 00:17:42.486 "name": "xnvme_bdev" 00:17:42.486 }, 00:17:42.486 "method": "bdev_xnvme_create" 00:17:42.486 }, 00:17:42.486 { 00:17:42.486 "method": "bdev_wait_for_examine" 00:17:42.486 } 00:17:42.486 ] 00:17:42.486 } 00:17:42.486 ] 00:17:42.486 } 00:17:42.745 [2024-11-27 12:01:32.550422] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:42.745 [2024-11-27 12:01:32.550550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73093 ] 00:17:42.745 [2024-11-27 12:01:32.730333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.003 [2024-11-27 12:01:32.846392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.261 Running I/O for 5 seconds... 
00:17:45.571 72256.00 IOPS, 282.25 MiB/s [2024-11-27T12:01:36.192Z] 72352.00 IOPS, 282.62 MiB/s [2024-11-27T12:01:37.570Z] 72469.33 IOPS, 283.08 MiB/s [2024-11-27T12:01:38.508Z] 72496.00 IOPS, 283.19 MiB/s 00:17:48.455 Latency(us) 00:17:48.455 [2024-11-27T12:01:38.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.455 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:17:48.455 xnvme_bdev : 5.00 72455.66 283.03 0.00 0.00 880.66 611.93 3553.16 00:17:48.455 [2024-11-27T12:01:38.508Z] =================================================================================================================== 00:17:48.455 [2024-11-27T12:01:38.508Z] Total : 72455.66 283.03 0.00 0.00 880.66 611.93 3553.16 00:17:49.391 12:01:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:49.391 12:01:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:17:49.391 12:01:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:49.391 12:01:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:49.391 12:01:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:49.391 { 00:17:49.391 "subsystems": [ 00:17:49.391 { 00:17:49.391 "subsystem": "bdev", 00:17:49.391 "config": [ 00:17:49.391 { 00:17:49.391 "params": { 00:17:49.391 "io_mechanism": "io_uring_cmd", 00:17:49.391 "conserve_cpu": true, 00:17:49.391 "filename": "/dev/ng0n1", 00:17:49.391 "name": "xnvme_bdev" 00:17:49.391 }, 00:17:49.391 "method": "bdev_xnvme_create" 00:17:49.391 }, 00:17:49.391 { 00:17:49.391 "method": "bdev_wait_for_examine" 00:17:49.391 } 00:17:49.391 ] 00:17:49.391 } 00:17:49.391 ] 00:17:49.391 } 00:17:49.391 [2024-11-27 12:01:39.347192] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:17:49.391 [2024-11-27 12:01:39.347316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73167 ] 00:17:49.650 [2024-11-27 12:01:39.528847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.650 [2024-11-27 12:01:39.645612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.218 Running I/O for 5 seconds... 
00:17:52.087 52057.00 IOPS, 203.35 MiB/s [2024-11-27T12:01:43.073Z] 49898.50 IOPS, 194.92 MiB/s [2024-11-27T12:01:44.008Z] 51030.33 IOPS, 199.34 MiB/s [2024-11-27T12:01:45.381Z] 51635.25 IOPS, 201.70 MiB/s 00:17:55.329 Latency(us) 00:17:55.329 [2024-11-27T12:01:45.382Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.329 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:17:55.329 xnvme_bdev : 5.00 52658.45 205.70 0.00 0.00 1210.44 94.18 15265.41 00:17:55.329 [2024-11-27T12:01:45.382Z] =================================================================================================================== 00:17:55.329 [2024-11-27T12:01:45.382Z] Total : 52658.45 205.70 0.00 0.00 1210.44 94.18 15265.41 00:17:56.276 00:17:56.276 real 0m27.527s 00:17:56.276 user 0m17.195s 00:17:56.276 sys 0m8.561s 00:17:56.276 12:01:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.276 12:01:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:56.276 ************************************ 00:17:56.276 END TEST xnvme_bdevperf 00:17:56.276 ************************************ 00:17:56.276 12:01:46 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:56.276 12:01:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:56.276 12:01:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:56.276 12:01:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:56.276 ************************************ 00:17:56.276 START TEST xnvme_fio_plugin 00:17:56.276 ************************************ 00:17:56.276 12:01:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:56.276 12:01:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:56.276 12:01:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:17:56.276 12:01:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:56.276 12:01:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:56.276 12:01:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:56.276 12:01:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:56.276 12:01:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:56.276 12:01:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:56.276 12:01:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:56.276 12:01:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:56.276 12:01:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:56.276 12:01:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:56.276 12:01:46 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1345 -- # shift 00:17:56.276 12:01:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:56.276 12:01:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:56.276 12:01:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:56.276 12:01:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:56.276 12:01:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:56.276 { 00:17:56.276 "subsystems": [ 00:17:56.276 { 00:17:56.276 "subsystem": "bdev", 00:17:56.276 "config": [ 00:17:56.276 { 00:17:56.276 "params": { 00:17:56.276 "io_mechanism": "io_uring_cmd", 00:17:56.276 "conserve_cpu": true, 00:17:56.276 "filename": "/dev/ng0n1", 00:17:56.277 "name": "xnvme_bdev" 00:17:56.277 }, 00:17:56.277 "method": "bdev_xnvme_create" 00:17:56.277 }, 00:17:56.277 { 00:17:56.277 "method": "bdev_wait_for_examine" 00:17:56.277 } 00:17:56.277 ] 00:17:56.277 } 00:17:56.277 ] 00:17:56.277 } 00:17:56.277 12:01:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:56.277 12:01:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:56.277 12:01:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:56.277 12:01:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:56.277 12:01:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:56.536 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:56.536 fio-3.35 00:17:56.536 Starting 1 thread 00:18:03.113 00:18:03.113 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73291: Wed Nov 27 12:01:52 2024 00:18:03.113 read: IOPS=25.9k, BW=101MiB/s (106MB/s)(507MiB/5001msec) 00:18:03.113 slat (usec): min=2, max=125, avg= 7.10, stdev= 3.71 00:18:03.113 clat (usec): min=1012, max=6185, avg=2181.49, stdev=466.18 00:18:03.113 lat (usec): min=1015, max=6213, avg=2188.59, stdev=468.10 00:18:03.113 clat percentiles (usec): 00:18:03.113 | 1.00th=[ 1123], 5.00th=[ 1270], 10.00th=[ 1418], 20.00th=[ 1729], 00:18:03.113 | 30.00th=[ 2040], 40.00th=[ 2212], 50.00th=[ 2311], 60.00th=[ 2376], 00:18:03.113 | 70.00th=[ 2474], 80.00th=[ 2573], 90.00th=[ 2671], 95.00th=[ 2737], 00:18:03.113 | 99.00th=[ 2835], 99.50th=[ 2900], 99.90th=[ 3064], 99.95th=[ 3654], 00:18:03.113 | 99.99th=[ 6063] 00:18:03.113 bw ( KiB/s): min=89600, max=124928, per=100.00%, avg=105193.00, stdev=14167.66, samples=9 00:18:03.113 iops : min=22400, max=31232, avg=26298.22, stdev=3541.92, samples=9 00:18:03.113 lat (msec) : 2=28.56%, 4=71.39%, 10=0.05% 00:18:03.113 cpu : usr=46.98%, sys=49.34%, ctx=19, majf=0, minf=762 00:18:03.113 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:03.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:03.113 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:18:03.113 issued rwts: total=129696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:18:03.113 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:03.113 00:18:03.113 Run status group 0 (all jobs): 00:18:03.113 READ: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=507MiB (531MB), run=5001-5001msec 00:18:03.683 ----------------------------------------------------- 00:18:03.683 Suppressions used: 00:18:03.683 count bytes template 00:18:03.683 1 11 /usr/src/fio/parse.c 00:18:03.683 1 8 libtcmalloc_minimal.so 00:18:03.683 1 904 libcrypto.so 00:18:03.683 ----------------------------------------------------- 00:18:03.683 00:18:03.683 12:01:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:03.683 12:01:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:03.683 12:01:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:03.683 12:01:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:03.683 12:01:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:03.683 12:01:53 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:03.683 12:01:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:03.683 12:01:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:03.683 12:01:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:03.683 12:01:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:03.683 12:01:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:03.683 12:01:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:03.683 12:01:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:03.683 12:01:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:03.683 12:01:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:03.683 12:01:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:03.683 12:01:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:03.683 12:01:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:03.683 12:01:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:03.683 12:01:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:03.683 12:01:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:03.683 { 
00:18:03.683 "subsystems": [ 00:18:03.683 { 00:18:03.683 "subsystem": "bdev", 00:18:03.683 "config": [ 00:18:03.683 { 00:18:03.683 "params": { 00:18:03.683 "io_mechanism": "io_uring_cmd", 00:18:03.683 "conserve_cpu": true, 00:18:03.683 "filename": "/dev/ng0n1", 00:18:03.683 "name": "xnvme_bdev" 00:18:03.683 }, 00:18:03.683 "method": "bdev_xnvme_create" 00:18:03.683 }, 00:18:03.683 { 00:18:03.683 "method": "bdev_wait_for_examine" 00:18:03.683 } 00:18:03.683 ] 00:18:03.683 } 00:18:03.683 ] 00:18:03.683 } 00:18:03.943 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:03.943 fio-3.35 00:18:03.943 Starting 1 thread 00:18:10.543 00:18:10.543 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73387: Wed Nov 27 12:01:59 2024 00:18:10.543 write: IOPS=26.0k, BW=102MiB/s (107MB/s)(509MiB/5002msec); 0 zone resets 00:18:10.543 slat (usec): min=2, max=104, avg= 7.10, stdev= 3.85 00:18:10.543 clat (usec): min=938, max=7424, avg=2174.20, stdev=506.60 00:18:10.543 lat (usec): min=941, max=7452, avg=2181.30, stdev=508.72 00:18:10.543 clat percentiles (usec): 00:18:10.543 | 1.00th=[ 1090], 5.00th=[ 1221], 10.00th=[ 1352], 20.00th=[ 1647], 00:18:10.543 | 30.00th=[ 2024], 40.00th=[ 2212], 50.00th=[ 2311], 60.00th=[ 2409], 00:18:10.543 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 2704], 95.00th=[ 2769], 00:18:10.543 | 99.00th=[ 2900], 99.50th=[ 2999], 99.90th=[ 3523], 99.95th=[ 3654], 00:18:10.543 | 99.99th=[ 7308] 00:18:10.543 bw ( KiB/s): min=89088, max=136704, per=100.00%, avg=105678.56, stdev=20247.41, samples=9 00:18:10.543 iops : min=22272, max=34176, avg=26419.56, stdev=5061.91, samples=9 00:18:10.543 lat (usec) : 1000=0.13% 00:18:10.543 lat (msec) : 2=29.44%, 4=70.38%, 10=0.05% 00:18:10.543 cpu : usr=50.33%, sys=46.17%, ctx=10, majf=0, minf=762 00:18:10.543 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:10.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.543 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:10.543 issued rwts: total=0,130240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.543 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:10.543 00:18:10.543 Run status group 0 (all jobs): 00:18:10.543 WRITE: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=509MiB (533MB), run=5002-5002msec 00:18:11.112 ----------------------------------------------------- 00:18:11.112 Suppressions used: 00:18:11.112 count bytes template 00:18:11.112 1 11 /usr/src/fio/parse.c 00:18:11.112 1 8 libtcmalloc_minimal.so 00:18:11.112 1 904 libcrypto.so 00:18:11.112 ----------------------------------------------------- 00:18:11.112 00:18:11.112 00:18:11.112 real 0m14.771s 00:18:11.112 user 0m8.656s 00:18:11.112 sys 0m5.510s 00:18:11.112 12:02:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.112 12:02:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:11.112 ************************************ 00:18:11.112 END TEST xnvme_fio_plugin 00:18:11.112 ************************************ 00:18:11.112 12:02:01 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 72858 00:18:11.112 12:02:01 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72858 ']' 00:18:11.112 12:02:01 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 72858 00:18:11.112 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72858) - No such process 00:18:11.112 Process 
with pid 72858 is not found 00:18:11.112 12:02:01 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 72858 is not found' 00:18:11.112 00:18:11.112 real 3m52.313s 00:18:11.112 user 2m5.736s 00:18:11.112 sys 1m30.889s 00:18:11.112 12:02:01 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.112 12:02:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:11.112 ************************************ 00:18:11.112 END TEST nvme_xnvme 00:18:11.112 ************************************ 00:18:11.112 12:02:01 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:18:11.112 12:02:01 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:11.112 12:02:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:11.112 12:02:01 -- common/autotest_common.sh@10 -- # set +x 00:18:11.112 ************************************ 00:18:11.112 START TEST blockdev_xnvme 00:18:11.112 ************************************ 00:18:11.113 12:02:01 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:18:11.372 * Looking for test storage... 00:18:11.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:11.372 12:02:01 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:11.372 12:02:01 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:18:11.372 12:02:01 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:11.372 12:02:01 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:11.372 12:02:01 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:11.373 12:02:01 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:11.373 12:02:01 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:11.373 12:02:01 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:11.373 12:02:01 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:11.373 12:02:01 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:11.373 12:02:01 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:18:11.373 12:02:01 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:11.373 12:02:01 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:11.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.373 --rc genhtml_branch_coverage=1 00:18:11.373 --rc genhtml_function_coverage=1 00:18:11.373 --rc genhtml_legend=1 00:18:11.373 --rc geninfo_all_blocks=1 00:18:11.373 --rc geninfo_unexecuted_blocks=1 00:18:11.373 00:18:11.373 ' 00:18:11.373 12:02:01 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:11.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.373 --rc genhtml_branch_coverage=1 00:18:11.373 --rc genhtml_function_coverage=1 00:18:11.373 --rc genhtml_legend=1 00:18:11.373 --rc geninfo_all_blocks=1 00:18:11.373 --rc geninfo_unexecuted_blocks=1 00:18:11.373 00:18:11.373 ' 00:18:11.373 12:02:01 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:11.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.373 --rc genhtml_branch_coverage=1 00:18:11.373 --rc genhtml_function_coverage=1 00:18:11.373 --rc genhtml_legend=1 00:18:11.373 --rc geninfo_all_blocks=1 00:18:11.373 --rc geninfo_unexecuted_blocks=1 00:18:11.373 00:18:11.373 ' 00:18:11.373 12:02:01 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:11.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.373 --rc genhtml_branch_coverage=1 00:18:11.373 --rc genhtml_function_coverage=1 00:18:11.373 --rc genhtml_legend=1 00:18:11.373 --rc geninfo_all_blocks=1 00:18:11.373 --rc geninfo_unexecuted_blocks=1 00:18:11.373 00:18:11.373 ' 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73527 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:11.373 12:02:01 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73527 00:18:11.373 12:02:01 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73527 ']' 00:18:11.373 12:02:01 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.373 12:02:01 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.373 12:02:01 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.373 12:02:01 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.373 12:02:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:11.633 [2024-11-27 12:02:01.495258] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
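With the target listening, setup_xnvme_conf (traced below) enumerates the NVMe namespaces, skips any zoned ones, and registers each remaining /dev/nvme*n* block device as an io_uring xnvme bdev. Reduced to its essentials, a sketch of that loop (the io_uring mechanism and the -c conserve-cpu flag match this trace; the zoned test mirrors the is_block_zoned helper):

    io_mechanism=io_uring
    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue                                 # block devices only
        zoned=$(cat "/sys/block/${nvme##*/}/queue/zoned" 2>/dev/null)
        [[ -n $zoned && $zoned != none ]] && continue              # skip zoned namespaces
        ./scripts/rpc.py bdev_xnvme_create "$nvme" "${nvme##*/}" "$io_mechanism" -c
    done

In the run below this yields six bdevs (nvme0n1, nvme0n2, nvme0n3, nvme1n1, nvme2n1, nvme3n1), which the test then inspects via bdev_get_bdevs.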
00:18:11.633 [2024-11-27 12:02:01.495416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73527 ] 00:18:11.633 [2024-11-27 12:02:01.678240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.892 [2024-11-27 12:02:01.785186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.831 12:02:02 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.831 12:02:02 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:18:12.831 12:02:02 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:18:12.831 12:02:02 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:18:12.831 12:02:02 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:18:12.831 12:02:02 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:18:12.831 12:02:02 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:13.398 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:13.966 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:18:13.966 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:18:13.966 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:18:14.225 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:18:14.225 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:18:14.225 12:02:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:14.225 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:14.225 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:18:14.225 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:14.225 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:14.225 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:14.225 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:18:14.226 12:02:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.226 12:02:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:18:14.226 nvme0n1 00:18:14.226 nvme0n2 00:18:14.226 nvme0n3 00:18:14.226 nvme1n1 00:18:14.226 nvme2n1 00:18:14.226 nvme3n1 00:18:14.226 12:02:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:18:14.226 12:02:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.226 12:02:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:14.226 12:02:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:18:14.226 12:02:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.226 12:02:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:14.226 12:02:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:18:14.226 12:02:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.226 12:02:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:14.226 12:02:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:14.226 12:02:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.226 12:02:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:14.226 12:02:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:18:14.226 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:18:14.226 12:02:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.226 12:02:04 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:18:14.486 12:02:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.486 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:18:14.486 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:18:14.486 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c648c267-4eb2-4348-914f-6d69ef62b881"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c648c267-4eb2-4348-914f-6d69ef62b881",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "4523b46d-aaf3-43cf-b210-92fd4141b7f6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4523b46d-aaf3-43cf-b210-92fd4141b7f6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "31991416-aa7f-4ce9-a645-240d7d6fa99c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "31991416-aa7f-4ce9-a645-240d7d6fa99c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "527ebfa3-9eeb-4dbd-8756-eddbaaa8640b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "527ebfa3-9eeb-4dbd-8756-eddbaaa8640b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "36166fcf-c3dc-434a-9c94-97ae984cc90e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "36166fcf-c3dc-434a-9c94-97ae984cc90e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "ca08fd55-9300-4267-bb6f-5853c8ebcbe3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ca08fd55-9300-4267-bb6f-5853c8ebcbe3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:18:14.486 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:18:14.486 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:18:14.486 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:18:14.486 12:02:04 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73527 00:18:14.486 12:02:04 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73527 ']' 00:18:14.486 12:02:04 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73527 00:18:14.486 12:02:04 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:18:14.486 12:02:04 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.486 12:02:04 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73527 00:18:14.487 killing process with pid 73527 00:18:14.487 12:02:04 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.487 12:02:04 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.487 12:02:04 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73527' 00:18:14.487 12:02:04 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73527 00:18:14.487 
12:02:04 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73527 00:18:17.027 12:02:06 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:17.027 12:02:06 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:18:17.027 12:02:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:17.027 12:02:06 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:17.027 12:02:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:17.027 ************************************ 00:18:17.027 START TEST bdev_hello_world 00:18:17.027 ************************************ 00:18:17.027 12:02:06 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:18:17.027 [2024-11-27 12:02:06.762226] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:18:17.027 [2024-11-27 12:02:06.762338] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73817 ] 00:18:17.027 [2024-11-27 12:02:06.942608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.027 [2024-11-27 12:02:07.053504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.597 [2024-11-27 12:02:07.481796] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:17.597 [2024-11-27 12:02:07.482108] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:18:17.597 [2024-11-27 12:02:07.482159] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:17.597 [2024-11-27 12:02:07.484588] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:17.597 [2024-11-27 12:02:07.484996] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:17.597 [2024-11-27 12:02:07.485024] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:17.597 [2024-11-27 12:02:07.485450] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
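The hello_bdev NOTICE lines above come from SPDK's example application; the step can be reproduced outside the harness with the same command traced above (run with sufficient privileges for the xNVMe devices):

# Re-running the bdev_hello_world step by hand
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -b nvme0n1
# Expected NOTICE sequence: open bdev -> open io channel -> write -> read back "Hello World!"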
00:18:17.597 00:18:17.597 [2024-11-27 12:02:07.485502] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:18.534 00:18:18.534 ************************************ 00:18:18.534 END TEST bdev_hello_world 00:18:18.534 ************************************ 00:18:18.534 real 0m1.885s 00:18:18.534 user 0m1.534s 00:18:18.534 sys 0m0.232s 00:18:18.534 12:02:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:18.534 12:02:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:18.794 12:02:08 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:18:18.794 12:02:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:18.794 12:02:08 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.794 12:02:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:18.794 ************************************ 00:18:18.794 START TEST bdev_bounds 00:18:18.794 ************************************ 00:18:18.794 12:02:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:18:18.794 Process bdevio pid: 73859 00:18:18.794 12:02:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73859 00:18:18.794 12:02:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:18.794 12:02:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73859' 00:18:18.794 12:02:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:18.794 12:02:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73859 00:18:18.794 12:02:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73859 ']' 00:18:18.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.794 12:02:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.794 12:02:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.794 12:02:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.794 12:02:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.794 12:02:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:18.794 [2024-11-27 12:02:08.754385] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
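waitforlisten above polls the UNIX domain socket until the bdevio process answers RPC; a rough equivalent of that loop, assuming rpc_get_methods as the probe call and a short retry interval:

# Sketch: poll the RPC socket the way waitforlisten does (max_retries=100 as traced)
for ((i = 0; i < 100; i++)); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.5   # assumed retry interval
done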
00:18:18.794 [2024-11-27 12:02:08.754657] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73859 ] 00:18:19.053 [2024-11-27 12:02:08.934045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:19.053 [2024-11-27 12:02:09.048175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.053 [2024-11-27 12:02:09.048336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.053 [2024-11-27 12:02:09.048402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.628 12:02:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.628 12:02:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:18:19.628 12:02:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:19.628 I/O targets: 00:18:19.628 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:19.628 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:19.628 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:19.628 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:18:19.628 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:18:19.628 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:18:19.628 00:18:19.628 00:18:19.628 CUnit - A unit testing framework for C - Version 2.1-3 00:18:19.628 http://cunit.sourceforge.net/ 00:18:19.628 00:18:19.628 00:18:19.628 Suite: bdevio tests on: nvme3n1 00:18:19.628 Test: blockdev write read block ...passed 00:18:19.628 Test: blockdev write zeroes read block ...passed 00:18:19.887 Test: blockdev write zeroes read no split ...passed 00:18:19.887 Test: blockdev write zeroes read split ...passed 00:18:19.887 Test: blockdev write zeroes read split partial ...passed 00:18:19.887 Test: blockdev reset ...passed 00:18:19.887 Test: blockdev write read 8 blocks ...passed 00:18:19.887 Test: blockdev write read size > 128k ...passed 00:18:19.887 Test: blockdev write read invalid size ...passed 00:18:19.887 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:19.887 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:19.887 Test: blockdev write read max offset ...passed 00:18:19.887 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:19.887 Test: blockdev writev readv 8 blocks ...passed 00:18:19.887 Test: blockdev writev readv 30 x 1block ...passed 00:18:19.887 Test: blockdev writev readv block ...passed 00:18:19.887 Test: blockdev writev readv size > 128k ...passed 00:18:19.887 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:19.887 Test: blockdev comparev and writev ...passed 00:18:19.887 Test: blockdev nvme passthru rw ...passed 00:18:19.887 Test: blockdev nvme passthru vendor specific ...passed 00:18:19.887 Test: blockdev nvme admin passthru ...passed 00:18:19.887 Test: blockdev copy ...passed 00:18:19.887 Suite: bdevio tests on: nvme2n1 00:18:19.887 Test: blockdev write read block ...passed 00:18:19.887 Test: blockdev write zeroes read block ...passed 00:18:19.887 Test: blockdev write zeroes read no split ...passed 00:18:19.887 Test: blockdev write zeroes read split ...passed 00:18:19.887 Test: blockdev write zeroes read split partial ...passed 00:18:19.887 Test: blockdev reset ...passed 
00:18:19.887 Test: blockdev write read 8 blocks ...passed 00:18:19.887 Test: blockdev write read size > 128k ...passed 00:18:19.887 Test: blockdev write read invalid size ...passed 00:18:19.887 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:19.887 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:19.887 Test: blockdev write read max offset ...passed 00:18:19.887 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:19.887 Test: blockdev writev readv 8 blocks ...passed 00:18:19.887 Test: blockdev writev readv 30 x 1block ...passed 00:18:19.887 Test: blockdev writev readv block ...passed 00:18:19.887 Test: blockdev writev readv size > 128k ...passed 00:18:19.887 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:19.887 Test: blockdev comparev and writev ...passed 00:18:19.887 Test: blockdev nvme passthru rw ...passed 00:18:19.887 Test: blockdev nvme passthru vendor specific ...passed 00:18:19.887 Test: blockdev nvme admin passthru ...passed 00:18:19.887 Test: blockdev copy ...passed 00:18:19.887 Suite: bdevio tests on: nvme1n1 00:18:19.887 Test: blockdev write read block ...passed 00:18:19.887 Test: blockdev write zeroes read block ...passed 00:18:19.887 Test: blockdev write zeroes read no split ...passed 00:18:19.887 Test: blockdev write zeroes read split ...passed 00:18:20.147 Test: blockdev write zeroes read split partial ...passed 00:18:20.147 Test: blockdev reset ...passed 00:18:20.147 Test: blockdev write read 8 blocks ...passed 00:18:20.147 Test: blockdev write read size > 128k ...passed 00:18:20.147 Test: blockdev write read invalid size ...passed 00:18:20.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:20.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:20.147 Test: blockdev write read max offset ...passed 00:18:20.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:20.147 Test: blockdev writev readv 8 blocks ...passed 00:18:20.148 Test: blockdev writev readv 30 x 1block ...passed 00:18:20.148 Test: blockdev writev readv block ...passed 00:18:20.148 Test: blockdev writev readv size > 128k ...passed 00:18:20.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:20.148 Test: blockdev comparev and writev ...passed 00:18:20.148 Test: blockdev nvme passthru rw ...passed 00:18:20.148 Test: blockdev nvme passthru vendor specific ...passed 00:18:20.148 Test: blockdev nvme admin passthru ...passed 00:18:20.148 Test: blockdev copy ...passed 00:18:20.148 Suite: bdevio tests on: nvme0n3 00:18:20.148 Test: blockdev write read block ...passed 00:18:20.148 Test: blockdev write zeroes read block ...passed 00:18:20.148 Test: blockdev write zeroes read no split ...passed 00:18:20.148 Test: blockdev write zeroes read split ...passed 00:18:20.148 Test: blockdev write zeroes read split partial ...passed 00:18:20.148 Test: blockdev reset ...passed 00:18:20.148 Test: blockdev write read 8 blocks ...passed 00:18:20.148 Test: blockdev write read size > 128k ...passed 00:18:20.148 Test: blockdev write read invalid size ...passed 00:18:20.148 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:20.148 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:20.148 Test: blockdev write read max offset ...passed 00:18:20.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:20.148 Test: blockdev writev readv 8 blocks 
...passed 00:18:20.148 Test: blockdev writev readv 30 x 1block ...passed 00:18:20.148 Test: blockdev writev readv block ...passed 00:18:20.148 Test: blockdev writev readv size > 128k ...passed 00:18:20.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:20.148 Test: blockdev comparev and writev ...passed 00:18:20.148 Test: blockdev nvme passthru rw ...passed 00:18:20.148 Test: blockdev nvme passthru vendor specific ...passed 00:18:20.148 Test: blockdev nvme admin passthru ...passed 00:18:20.148 Test: blockdev copy ...passed 00:18:20.148 Suite: bdevio tests on: nvme0n2 00:18:20.148 Test: blockdev write read block ...passed 00:18:20.148 Test: blockdev write zeroes read block ...passed 00:18:20.148 Test: blockdev write zeroes read no split ...passed 00:18:20.148 Test: blockdev write zeroes read split ...passed 00:18:20.148 Test: blockdev write zeroes read split partial ...passed 00:18:20.148 Test: blockdev reset ...passed 00:18:20.148 Test: blockdev write read 8 blocks ...passed 00:18:20.148 Test: blockdev write read size > 128k ...passed 00:18:20.148 Test: blockdev write read invalid size ...passed 00:18:20.148 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:20.148 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:20.148 Test: blockdev write read max offset ...passed 00:18:20.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:20.148 Test: blockdev writev readv 8 blocks ...passed 00:18:20.148 Test: blockdev writev readv 30 x 1block ...passed 00:18:20.148 Test: blockdev writev readv block ...passed 00:18:20.148 Test: blockdev writev readv size > 128k ...passed 00:18:20.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:20.148 Test: blockdev comparev and writev ...passed 00:18:20.148 Test: blockdev nvme passthru rw ...passed 00:18:20.148 Test: blockdev nvme passthru vendor specific ...passed 00:18:20.148 Test: blockdev nvme admin passthru ...passed 00:18:20.148 Test: blockdev copy ...passed 00:18:20.148 Suite: bdevio tests on: nvme0n1 00:18:20.148 Test: blockdev write read block ...passed 00:18:20.148 Test: blockdev write zeroes read block ...passed 00:18:20.148 Test: blockdev write zeroes read no split ...passed 00:18:20.148 Test: blockdev write zeroes read split ...passed 00:18:20.407 Test: blockdev write zeroes read split partial ...passed 00:18:20.407 Test: blockdev reset ...passed 00:18:20.407 Test: blockdev write read 8 blocks ...passed 00:18:20.407 Test: blockdev write read size > 128k ...passed 00:18:20.407 Test: blockdev write read invalid size ...passed 00:18:20.407 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:20.407 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:20.407 Test: blockdev write read max offset ...passed 00:18:20.407 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:20.407 Test: blockdev writev readv 8 blocks ...passed 00:18:20.407 Test: blockdev writev readv 30 x 1block ...passed 00:18:20.407 Test: blockdev writev readv block ...passed 00:18:20.407 Test: blockdev writev readv size > 128k ...passed 00:18:20.407 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:20.407 Test: blockdev comparev and writev ...passed 00:18:20.407 Test: blockdev nvme passthru rw ...passed 00:18:20.407 Test: blockdev nvme passthru vendor specific ...passed 00:18:20.407 Test: blockdev nvme admin passthru ...passed 00:18:20.407 Test: blockdev copy ...passed 
00:18:20.407 00:18:20.407 Run Summary: Type Total Ran Passed Failed Inactive 00:18:20.407 suites 6 6 n/a 0 0 00:18:20.407 tests 138 138 138 0 0 00:18:20.407 asserts 780 780 780 0 n/a 00:18:20.407 00:18:20.407 Elapsed time = 1.486 seconds 00:18:20.407 0 00:18:20.407 12:02:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73859 00:18:20.407 12:02:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73859 ']' 00:18:20.407 12:02:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73859 00:18:20.407 12:02:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:18:20.407 12:02:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.407 12:02:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73859 00:18:20.407 killing process with pid 73859 00:18:20.407 12:02:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:20.407 12:02:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:20.407 12:02:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73859' 00:18:20.407 12:02:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73859 00:18:20.407 12:02:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73859 00:18:21.788 ************************************ 00:18:21.788 END TEST bdev_bounds 00:18:21.788 ************************************ 00:18:21.788 12:02:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:21.788 00:18:21.788 real 0m2.736s 00:18:21.788 user 0m6.741s 00:18:21.788 sys 0m0.435s 00:18:21.788 12:02:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:21.788 12:02:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:21.788 12:02:11 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:18:21.788 12:02:11 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:21.788 12:02:11 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:21.788 12:02:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:21.788 ************************************ 00:18:21.788 START TEST bdev_nbd 00:18:21.788 ************************************ 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
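The bdev_nbd stage that begins here exports each bdev through the kernel's nbd driver and verifies it with a direct-I/O dd read; one attach/verify/detach cycle, condensed from the commands that appear verbatim in the trace below:

# One nbd cycle as exercised below (socket and paths as in this job)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
$rpc -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
$rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks   # prints '[]' once everything is detached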
00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73923 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73923 /var/tmp/spdk-nbd.sock 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73923 ']' 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:21.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.788 12:02:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:21.788 [2024-11-27 12:02:11.592284] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:18:21.788 [2024-11-27 12:02:11.592434] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.788 [2024-11-27 12:02:11.774060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.048 [2024-11-27 12:02:11.883237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:22.618 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:22.878 
1+0 records in 00:18:22.878 1+0 records out 00:18:22.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000673432 s, 6.1 MB/s 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:22.878 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:23.137 1+0 records in 00:18:23.137 1+0 records out 00:18:23.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601148 s, 6.8 MB/s 00:18:23.137 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:23.137 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:23.137 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:23.137 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:23.137 12:02:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:23.137 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:23.137 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:23.137 12:02:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:18:23.137 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:18:23.137 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:18:23.137 12:02:13 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:18:23.137 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:18:23.137 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:23.137 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:23.137 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:23.137 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:18:23.137 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:23.137 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:23.137 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:23.137 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:23.396 1+0 records in 00:18:23.396 1+0 records out 00:18:23.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000768416 s, 5.3 MB/s 00:18:23.396 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:23.396 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:23.396 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:23.396 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:23.396 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:23.396 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:23.396 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:23.396 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:18:23.396 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:18:23.396 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:18:23.396 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:18:23.396 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:18:23.396 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:23.396 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:23.396 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:23.396 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:18:23.396 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:23.396 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:23.396 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:23.396 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:23.396 1+0 records in 00:18:23.396 1+0 records out 00:18:23.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00082222 s, 5.0 MB/s 00:18:23.396 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:23.655 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:23.655 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:23.655 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:23.656 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:23.656 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:23.656 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:23.656 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:18:23.656 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:18:23.656 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:18:23.656 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:18:23.656 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:18:23.656 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:23.656 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:23.656 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:23.656 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:18:23.915 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:23.915 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:23.915 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:23.915 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:23.915 1+0 records in 00:18:23.915 1+0 records out 00:18:23.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112933 s, 3.6 MB/s 00:18:23.915 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:23.915 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:23.915 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:23.915 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:23.915 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:23.915 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:23.915 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:23.915 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:18:23.915 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:18:23.915 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:18:23.915 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:18:23.915 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:18:23.915 12:02:13 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:23.915 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:23.915 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:23.915 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:18:24.175 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:24.175 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:24.175 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:24.175 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:24.175 1+0 records in 00:18:24.175 1+0 records out 00:18:24.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000990404 s, 4.1 MB/s 00:18:24.175 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.175 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:24.175 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.175 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:24.175 12:02:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:24.175 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:24.175 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:24.175 12:02:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:24.175 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:24.175 { 00:18:24.175 "nbd_device": "/dev/nbd0", 00:18:24.175 "bdev_name": "nvme0n1" 00:18:24.175 }, 00:18:24.175 { 00:18:24.175 "nbd_device": "/dev/nbd1", 00:18:24.175 "bdev_name": "nvme0n2" 00:18:24.175 }, 00:18:24.175 { 00:18:24.175 "nbd_device": "/dev/nbd2", 00:18:24.175 "bdev_name": "nvme0n3" 00:18:24.175 }, 00:18:24.175 { 00:18:24.175 "nbd_device": "/dev/nbd3", 00:18:24.175 "bdev_name": "nvme1n1" 00:18:24.175 }, 00:18:24.175 { 00:18:24.175 "nbd_device": "/dev/nbd4", 00:18:24.175 "bdev_name": "nvme2n1" 00:18:24.175 }, 00:18:24.175 { 00:18:24.175 "nbd_device": "/dev/nbd5", 00:18:24.175 "bdev_name": "nvme3n1" 00:18:24.175 } 00:18:24.175 ]' 00:18:24.175 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:24.175 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:24.175 { 00:18:24.175 "nbd_device": "/dev/nbd0", 00:18:24.175 "bdev_name": "nvme0n1" 00:18:24.175 }, 00:18:24.175 { 00:18:24.175 "nbd_device": "/dev/nbd1", 00:18:24.175 "bdev_name": "nvme0n2" 00:18:24.175 }, 00:18:24.175 { 00:18:24.175 "nbd_device": "/dev/nbd2", 00:18:24.175 "bdev_name": "nvme0n3" 00:18:24.175 }, 00:18:24.175 { 00:18:24.175 "nbd_device": "/dev/nbd3", 00:18:24.175 "bdev_name": "nvme1n1" 00:18:24.175 }, 00:18:24.175 { 00:18:24.175 "nbd_device": "/dev/nbd4", 00:18:24.175 "bdev_name": "nvme2n1" 00:18:24.175 }, 00:18:24.175 { 00:18:24.175 "nbd_device": "/dev/nbd5", 00:18:24.175 "bdev_name": "nvme3n1" 00:18:24.175 } 00:18:24.175 ]' 00:18:24.175 12:02:14 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:24.434 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:18:24.434 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:24.434 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:18:24.434 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:24.434 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:24.434 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:24.434 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:24.434 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:24.434 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:24.434 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:24.434 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:24.434 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:24.434 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:24.434 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:24.434 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:24.434 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:24.434 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:24.694 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:24.694 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:24.694 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:24.694 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:24.694 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:24.694 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:24.694 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:24.694 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:24.694 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:24.694 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:18:24.953 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:18:24.953 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:18:24.953 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:18:24.953 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:24.953 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:24.953 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:18:24.953 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:24.953 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:24.953 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:24.953 12:02:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:18:25.212 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:18:25.212 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:18:25.212 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:18:25.212 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.212 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.212 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:18:25.212 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:25.212 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.212 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.212 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:18:25.471 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:18:25.471 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:18:25.471 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:18:25.471 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.471 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.471 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:18:25.471 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:25.471 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.471 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.471 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:18:25.730 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:18:25.730 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:18:25.730 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:18:25.730 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.730 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.730 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:18:25.730 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:25.730 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.730 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:25.730 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:25.730 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:25.990 12:02:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:18:26.249 /dev/nbd0 00:18:26.249 12:02:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:26.249 12:02:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:26.249 12:02:16 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:26.249 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:26.249 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:26.249 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:26.249 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:26.249 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:26.249 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:26.249 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:26.249 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:26.249 1+0 records in 00:18:26.249 1+0 records out 00:18:26.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628722 s, 6.5 MB/s 00:18:26.249 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:26.249 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:26.249 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:26.249 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:26.249 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:26.249 12:02:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:26.249 12:02:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:26.250 12:02:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:18:26.508 /dev/nbd1 00:18:26.508 12:02:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:26.508 12:02:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:26.508 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:26.508 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:26.508 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:26.508 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:26.508 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:26.508 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:26.508 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:26.508 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:26.508 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:26.508 1+0 records in 00:18:26.508 1+0 records out 00:18:26.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00068817 s, 6.0 MB/s 00:18:26.508 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:26.508 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:26.508 12:02:16 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:26.508 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:26.508 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:26.508 12:02:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:26.508 12:02:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:26.508 12:02:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:18:26.767 /dev/nbd10 00:18:26.767 12:02:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:18:26.767 12:02:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:18:26.767 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:18:26.767 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:26.767 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:26.767 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:26.767 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:18:26.767 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:26.767 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:26.767 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:26.767 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:26.767 1+0 records in 00:18:26.767 1+0 records out 00:18:26.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735056 s, 5.6 MB/s 00:18:26.767 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:26.767 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:26.767 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:26.767 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:26.767 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:26.767 12:02:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:26.767 12:02:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:26.767 12:02:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:18:27.027 /dev/nbd11 00:18:27.027 12:02:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:18:27.027 12:02:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:18:27.027 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:18:27.027 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:27.027 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:27.027 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:27.027 12:02:16 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:18:27.027 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:27.027 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:27.027 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:27.027 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:27.027 1+0 records in 00:18:27.027 1+0 records out 00:18:27.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000838093 s, 4.9 MB/s 00:18:27.027 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.027 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:27.027 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.027 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:27.027 12:02:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:27.027 12:02:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:27.028 12:02:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:27.028 12:02:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:18:27.287 /dev/nbd12 00:18:27.287 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:18:27.287 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:18:27.287 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:18:27.287 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:27.287 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:27.287 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:27.287 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:18:27.287 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:27.287 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:27.287 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:27.287 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:27.287 1+0 records in 00:18:27.287 1+0 records out 00:18:27.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00092231 s, 4.4 MB/s 00:18:27.287 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.287 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:27.287 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.287 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:27.287 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:27.287 12:02:17 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:27.287 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:27.287 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:18:27.546 /dev/nbd13 00:18:27.546 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:18:27.546 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:18:27.546 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:18:27.546 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:27.546 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:27.546 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:27.546 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:18:27.546 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:27.546 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:27.546 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:27.546 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:27.546 1+0 records in 00:18:27.546 1+0 records out 00:18:27.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000818118 s, 5.0 MB/s 00:18:27.547 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.547 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:27.547 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.547 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:27.547 12:02:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:27.547 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:27.547 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:27.547 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:27.547 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:27.547 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:27.806 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:27.806 { 00:18:27.806 "nbd_device": "/dev/nbd0", 00:18:27.806 "bdev_name": "nvme0n1" 00:18:27.806 }, 00:18:27.806 { 00:18:27.806 "nbd_device": "/dev/nbd1", 00:18:27.806 "bdev_name": "nvme0n2" 00:18:27.806 }, 00:18:27.806 { 00:18:27.806 "nbd_device": "/dev/nbd10", 00:18:27.806 "bdev_name": "nvme0n3" 00:18:27.806 }, 00:18:27.806 { 00:18:27.806 "nbd_device": "/dev/nbd11", 00:18:27.806 "bdev_name": "nvme1n1" 00:18:27.806 }, 00:18:27.806 { 00:18:27.806 "nbd_device": "/dev/nbd12", 00:18:27.806 "bdev_name": "nvme2n1" 00:18:27.806 }, 00:18:27.806 { 00:18:27.806 "nbd_device": "/dev/nbd13", 00:18:27.806 "bdev_name": "nvme3n1" 00:18:27.806 } 00:18:27.806 ]' 00:18:27.806 12:02:17 
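
The trace above is SPDK's NBD attach-and-wait pattern: nbd_start_disk binds a bdev to a /dev/nbdX node over the RPC socket, then the waitfornbd helper polls /proc/partitions and proves the node actually services I/O with a single direct 4 KiB read. A condensed sketch of that loop follows; the RPC paths and the 20-try budget come straight from the trace, while the sleep interval and the /tmp scratch-file path are illustrative stand-ins:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

# Bind bdev nvme0n1 to the requested NBD node over the RPC socket.
"$rpc" -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0

# Wait until the kernel has registered the device (20-try budget, as traced)...
for ((i = 1; i <= 20; i++)); do
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1    # illustrative interval; the helper's actual sleep is not shown in the trace
done

# ...then prove it services I/O: one direct 4 KiB read must succeed and
# produce a non-empty file.
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
[[ $(stat -c %s /tmp/nbdtest) -ne 0 ]]    # mirrors the '[' 4096 '!=' 0 ']' check above

Once all six nodes pass this probe, the harness writes 1 MiB of urandom through each one and cmp's it back against the source file, which is the dd/cmp traffic that follows in the log.
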
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:27.806 { 00:18:27.806 "nbd_device": "/dev/nbd0", 00:18:27.806 "bdev_name": "nvme0n1" 00:18:27.806 }, 00:18:27.806 { 00:18:27.806 "nbd_device": "/dev/nbd1", 00:18:27.806 "bdev_name": "nvme0n2" 00:18:27.806 }, 00:18:27.806 { 00:18:27.806 "nbd_device": "/dev/nbd10", 00:18:27.806 "bdev_name": "nvme0n3" 00:18:27.806 }, 00:18:27.806 { 00:18:27.806 "nbd_device": "/dev/nbd11", 00:18:27.806 "bdev_name": "nvme1n1" 00:18:27.806 }, 00:18:27.806 { 00:18:27.806 "nbd_device": "/dev/nbd12", 00:18:27.806 "bdev_name": "nvme2n1" 00:18:27.806 }, 00:18:27.806 { 00:18:27.806 "nbd_device": "/dev/nbd13", 00:18:27.806 "bdev_name": "nvme3n1" 00:18:27.806 } 00:18:27.806 ]' 00:18:27.806 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:27.806 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:27.806 /dev/nbd1 00:18:27.806 /dev/nbd10 00:18:27.806 /dev/nbd11 00:18:27.806 /dev/nbd12 00:18:27.806 /dev/nbd13' 00:18:27.806 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:27.806 /dev/nbd1 00:18:27.806 /dev/nbd10 00:18:27.807 /dev/nbd11 00:18:27.807 /dev/nbd12 00:18:27.807 /dev/nbd13' 00:18:27.807 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:27.807 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:18:27.807 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:18:27.807 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:18:27.807 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:18:27.807 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:18:27.807 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:27.807 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:27.807 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:27.807 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:27.807 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:27.807 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:27.807 256+0 records in 00:18:27.807 256+0 records out 00:18:27.807 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01164 s, 90.1 MB/s 00:18:27.807 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:27.807 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:28.067 256+0 records in 00:18:28.067 256+0 records out 00:18:28.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125929 s, 8.3 MB/s 00:18:28.067 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:28.067 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:28.067 256+0 records in 00:18:28.067 256+0 records out 00:18:28.067 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.130175 s, 8.1 MB/s 00:18:28.067 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:28.067 12:02:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:18:28.326 256+0 records in 00:18:28.326 256+0 records out 00:18:28.326 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1312 s, 8.0 MB/s 00:18:28.326 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:28.326 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:18:28.326 256+0 records in 00:18:28.326 256+0 records out 00:18:28.326 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127543 s, 8.2 MB/s 00:18:28.326 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:28.326 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:18:28.585 256+0 records in 00:18:28.585 256+0 records out 00:18:28.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156971 s, 6.7 MB/s 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:18:28.585 256+0 records in 00:18:28.585 256+0 records out 00:18:28.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129817 s, 8.1 MB/s 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:28.585 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:28.844 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:28.844 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:28.844 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:28.844 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:28.844 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:28.844 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:28.844 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:28.844 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:28.844 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:28.844 12:02:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:29.102 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:29.102 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:29.102 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:29.102 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:29.102 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:29.102 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:29.102 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:29.102 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:29.102 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:29.102 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:18:29.360 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:18:29.360 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:18:29.360 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:18:29.360 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:29.360 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:29.360 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:18:29.360 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:29.360 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:29.360 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:29.360 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:18:29.618 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:18:29.618 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:18:29.618 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:18:29.618 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:29.618 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:29.618 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:18:29.618 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:29.618 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:29.618 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:29.618 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:18:29.877 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:18:29.877 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:18:29.877 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:18:29.877 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:29.877 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:29.877 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:18:29.877 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:29.877 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:29.877 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:29.877 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:18:29.877 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:18:29.877 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:18:29.877 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:18:29.877 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:29.877 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:18:29.877 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:18:29.877 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:29.877 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:29.877 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:29.877 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:29.877 12:02:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:30.136 12:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:30.136 12:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:30.136 12:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:30.136 12:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:30.396 12:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:30.396 12:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:30.396 12:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:30.396 12:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:30.396 12:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:30.396 12:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:30.396 12:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:30.396 12:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:30.396 12:02:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:30.396 12:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:30.396 12:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:30.396 12:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:30.396 malloc_lvol_verify 00:18:30.396 12:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:30.654 34bfab45-8a5c-40ed-beeb-57d5e693f754 00:18:30.654 12:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:30.913 a05a75bd-dfe2-43bb-a418-333e0548e7a4 00:18:30.913 12:02:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:31.173 /dev/nbd0 00:18:31.173 12:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:31.173 12:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:31.173 12:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:31.173 12:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:31.173 12:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:31.173 mke2fs 1.47.0 (5-Feb-2023) 00:18:31.173 
Discarding device blocks: 0/4096 done 00:18:31.173 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:31.173 00:18:31.173 Allocating group tables: 0/1 done 00:18:31.173 Writing inode tables: 0/1 done 00:18:31.173 Creating journal (1024 blocks): done 00:18:31.173 Writing superblocks and filesystem accounting information: 0/1 done 00:18:31.173 00:18:31.173 12:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:31.173 12:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:31.173 12:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:31.173 12:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:31.173 12:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:31.173 12:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:31.173 12:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:31.173 12:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:31.433 12:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:31.433 12:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:31.433 12:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:31.433 12:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:31.433 12:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:31.433 12:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:31.433 12:02:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:31.433 12:02:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73923 00:18:31.433 12:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73923 ']' 00:18:31.433 12:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73923 00:18:31.433 12:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:18:31.433 12:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.433 12:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73923 00:18:31.433 12:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:31.433 killing process with pid 73923 00:18:31.433 12:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:31.433 12:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73923' 00:18:31.433 12:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73923 00:18:31.433 12:02:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73923 00:18:32.812 12:02:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:32.812 00:18:32.812 real 0m10.958s 00:18:32.812 user 0m13.935s 00:18:32.812 sys 0m4.779s 00:18:32.812 ************************************ 00:18:32.812 END TEST bdev_nbd 00:18:32.812 ************************************ 00:18:32.812 12:02:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:32.812 12:02:22 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:18:32.812 12:02:22 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:18:32.812 12:02:22 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:18:32.812 12:02:22 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:18:32.812 12:02:22 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:18:32.812 12:02:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:32.812 12:02:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:32.812 12:02:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:32.812 ************************************ 00:18:32.812 START TEST bdev_fio 00:18:32.812 ************************************ 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:32.812 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:18:32.812 
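
fio_config_gen has just seeded /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio for a verify workload, confirmed fio-3.35, and appended serialize_overlap=1; the loop that follows adds one [job_*] section per xNVMe bdev, each pinned by filename=. The generated file therefore has roughly this shape. Only serialize_overlap=1 and the job/filename pairs are visible in the trace; the other [global] keys are assumptions about what a typical verify template carries:

cat > /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio <<'EOF'
[global]
; the keys below are assumed verify-template defaults -- only
; serialize_overlap=1 and the job sections appear in the trace
thread=1
direct=1
verify=crc32c
serialize_overlap=1

[job_nvme0n1]
filename=nvme0n1
[job_nvme0n2]
filename=nvme0n2
[job_nvme0n3]
filename=nvme0n3
[job_nvme1n1]
filename=nvme1n1
[job_nvme2n1]
filename=nvme2n1
[job_nvme3n1]
filename=nvme3n1
EOF

The ioengine, queue depth, block size, and runtime are not in the file; they arrive on the fio command line via the fio_params traced next (--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10).
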
12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:18:32.812 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:32.813 ************************************ 00:18:32.813 START TEST bdev_fio_rw_verify 00:18:32.813 ************************************ 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:32.813 12:02:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:33.072 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:33.072 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:33.072 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:33.072 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:33.072 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:33.072 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:33.072 fio-3.35 00:18:33.072 Starting 6 threads 00:18:45.287 00:18:45.287 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74328: Wed Nov 27 12:02:33 2024 00:18:45.287 read: IOPS=34.1k, BW=133MiB/s (140MB/s)(1331MiB/10001msec) 00:18:45.287 slat (usec): min=2, max=1192, avg= 7.76, stdev= 8.09 00:18:45.287 clat (usec): min=102, max=7059, avg=519.75, 
stdev=242.07 00:18:45.287 lat (usec): min=107, max=7090, avg=527.51, stdev=243.38 00:18:45.287 clat percentiles (usec): 00:18:45.287 | 50.000th=[ 498], 99.000th=[ 1205], 99.900th=[ 2245], 99.990th=[ 3720], 00:18:45.287 | 99.999th=[ 7046] 00:18:45.287 write: IOPS=34.4k, BW=135MiB/s (141MB/s)(1346MiB/10001msec); 0 zone resets 00:18:45.287 slat (usec): min=10, max=4046, avg=26.59, stdev=38.06 00:18:45.287 clat (usec): min=82, max=6554, avg=627.41, stdev=272.00 00:18:45.287 lat (usec): min=100, max=6567, avg=654.01, stdev=278.02 00:18:45.287 clat percentiles (usec): 00:18:45.287 | 50.000th=[ 603], 99.000th=[ 1450], 99.900th=[ 2278], 99.990th=[ 4015], 00:18:45.287 | 99.999th=[ 5473] 00:18:45.287 bw ( KiB/s): min=109742, max=161074, per=100.00%, avg=137950.16, stdev=2535.38, samples=114 00:18:45.287 iops : min=27434, max=40268, avg=34487.00, stdev=633.87, samples=114 00:18:45.287 lat (usec) : 100=0.01%, 250=7.76%, 500=34.10%, 750=37.83%, 1000=14.99% 00:18:45.287 lat (msec) : 2=5.15%, 4=0.16%, 10=0.01% 00:18:45.287 cpu : usr=54.09%, sys=30.16%, ctx=8401, majf=0, minf=28206 00:18:45.287 IO depths : 1=11.8%, 2=24.1%, 4=50.8%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:45.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.287 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.287 issued rwts: total=340852,344493,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.287 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:45.287 00:18:45.287 Run status group 0 (all jobs): 00:18:45.287 READ: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=1331MiB (1396MB), run=10001-10001msec 00:18:45.287 WRITE: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=1346MiB (1411MB), run=10001-10001msec 00:18:45.287 ----------------------------------------------------- 00:18:45.287 Suppressions used: 00:18:45.287 count bytes template 00:18:45.287 6 48 /usr/src/fio/parse.c 00:18:45.287 3411 327456 /usr/src/fio/iolog.c 00:18:45.287 1 8 libtcmalloc_minimal.so 00:18:45.287 1 904 libcrypto.so 00:18:45.287 ----------------------------------------------------- 00:18:45.287 00:18:45.287 00:18:45.287 real 0m12.656s 00:18:45.287 user 0m34.586s 00:18:45.287 sys 0m18.536s 00:18:45.287 12:02:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:45.287 ************************************ 00:18:45.287 END TEST bdev_fio_rw_verify 00:18:45.287 ************************************ 00:18:45.287 12:02:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:18:45.287 12:02:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c648c267-4eb2-4348-914f-6d69ef62b881"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c648c267-4eb2-4348-914f-6d69ef62b881",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "4523b46d-aaf3-43cf-b210-92fd4141b7f6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4523b46d-aaf3-43cf-b210-92fd4141b7f6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "31991416-aa7f-4ce9-a645-240d7d6fa99c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "31991416-aa7f-4ce9-a645-240d7d6fa99c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "527ebfa3-9eeb-4dbd-8756-eddbaaa8640b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "527ebfa3-9eeb-4dbd-8756-eddbaaa8640b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "36166fcf-c3dc-434a-9c94-97ae984cc90e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "36166fcf-c3dc-434a-9c94-97ae984cc90e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "ca08fd55-9300-4267-bb6f-5853c8ebcbe3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ca08fd55-9300-4267-bb6f-5853c8ebcbe3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:45.548 /home/vagrant/spdk_repo/spdk 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
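
The jq filter at blockdev.sh@354 above decides whether a trim pass runs: it keeps only bdevs whose supported_io_types.unmap is true. Every xNVMe bdev in the JSON dump reports "unmap": false, so the filter emits nothing, the [[ -n '' ]] test fails, and the trim stage is skipped. A condensed equivalent of that check, with $bdevs_json holding the objects printed above:

# Prints one name per trim-capable bdev; here it prints nothing, because
# all six xNVMe bdevs report "unmap": false.
printf '%s\n' "$bdevs_json" |
    jq -r 'select(.supported_io_types.unmap == true) | .name'
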
00:18:45.548 00:18:45.548 real 0m12.900s 00:18:45.548 user 0m34.715s 00:18:45.548 sys 0m18.655s 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:45.548 ************************************ 00:18:45.548 END TEST bdev_fio 00:18:45.548 ************************************ 00:18:45.548 12:02:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:45.548 12:02:35 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:45.548 12:02:35 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:45.548 12:02:35 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:45.548 12:02:35 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:45.548 12:02:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:45.548 ************************************ 00:18:45.548 START TEST bdev_verify 00:18:45.548 ************************************ 00:18:45.548 12:02:35 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:45.808 [2024-11-27 12:02:35.600263] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:18:45.808 [2024-11-27 12:02:35.600407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74508 ] 00:18:45.808 [2024-11-27 12:02:35.787687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:46.067 [2024-11-27 12:02:35.920173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.067 [2024-11-27 12:02:35.920206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.677 Running I/O for 5 seconds... 
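
bdev_verify drives every bdev through SPDK's bdevperf example rather than fio. The flags in the run_test line above mean queue depth 128, 4096-byte I/Os, a write-then-read-verify workload, five seconds per run, and reactors on cores 0 and 1 (-m 0x3); -C is passed through by the harness as-is. The invocation, spelled out:

# Flags as traced:  -q 128   outstanding I/Os per job
#                   -o 4096  I/O size in bytes
#                   -w verify  write, read back, and compare
#                   -t 5     seconds per run
#                   -m 0x3   reactors on cores 0 and 1 (consistent with the
#                            paired "Core Mask 0x1"/"Core Mask 0x2" jobs
#                            per bdev in the tables below)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3
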
00:18:49.029 21984.00 IOPS, 85.88 MiB/s [2024-11-27T12:02:40.020Z] 19776.00 IOPS, 77.25 MiB/s [2024-11-27T12:02:40.958Z] 19957.33 IOPS, 77.96 MiB/s [2024-11-27T12:02:41.895Z] 20192.00 IOPS, 78.88 MiB/s [2024-11-27T12:02:41.895Z] 20038.40 IOPS, 78.28 MiB/s 00:18:51.842 Latency(us) 00:18:51.842 [2024-11-27T12:02:41.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.842 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:51.842 Verification LBA range: start 0x0 length 0x80000 00:18:51.842 nvme0n1 : 5.06 1568.75 6.13 0.00 0.00 81455.43 10422.59 92645.27 00:18:51.842 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:51.842 Verification LBA range: start 0x80000 length 0x80000 00:18:51.842 nvme0n1 : 5.06 1466.88 5.73 0.00 0.00 87152.55 7211.59 91381.92 00:18:51.842 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:51.842 Verification LBA range: start 0x0 length 0x80000 00:18:51.842 nvme0n2 : 5.05 1572.36 6.14 0.00 0.00 81136.36 11738.58 88434.12 00:18:51.842 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:51.842 Verification LBA range: start 0x80000 length 0x80000 00:18:51.842 nvme0n2 : 5.05 1444.56 5.64 0.00 0.00 88404.21 11475.38 92645.27 00:18:51.842 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:51.842 Verification LBA range: start 0x0 length 0x80000 00:18:51.842 nvme0n3 : 5.07 1564.19 6.11 0.00 0.00 81427.38 17370.99 75379.56 00:18:51.842 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:51.842 Verification LBA range: start 0x80000 length 0x80000 00:18:51.842 nvme0n3 : 5.06 1442.37 5.63 0.00 0.00 88445.77 13791.51 91381.92 00:18:51.842 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:51.842 Verification LBA range: start 0x0 length 0x20000 00:18:51.842 nvme1n1 : 5.08 1563.67 6.11 0.00 0.00 81327.73 8369.66 82959.63 00:18:51.842 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:51.842 Verification LBA range: start 0x20000 length 0x20000 00:18:51.842 nvme1n1 : 5.04 1446.58 5.65 0.00 0.00 88118.20 9843.56 85065.20 00:18:51.842 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:51.842 Verification LBA range: start 0x0 length 0xbd0bd 00:18:51.842 nvme2n1 : 5.06 2438.97 9.53 0.00 0.00 52016.18 5106.02 65693.92 00:18:51.842 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:51.842 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:18:51.842 nvme2n1 : 5.06 2280.84 8.91 0.00 0.00 55731.75 7001.03 68641.72 00:18:51.842 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:51.842 Verification LBA range: start 0x0 length 0xa0000 00:18:51.842 nvme3n1 : 5.07 1565.77 6.12 0.00 0.00 80955.89 12317.61 91381.92 00:18:51.842 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:51.842 Verification LBA range: start 0xa0000 length 0xa0000 00:18:51.842 nvme3n1 : 5.04 1447.52 5.65 0.00 0.00 87869.84 5685.05 85907.43 00:18:51.842 [2024-11-27T12:02:41.895Z] =================================================================================================================== 00:18:51.842 [2024-11-27T12:02:41.895Z] Total : 19802.44 77.35 0.00 0.00 77172.46 5106.02 92645.27 00:18:52.780 00:18:52.780 real 0m7.221s 00:18:52.780 user 0m11.292s 00:18:52.780 sys 0m1.832s 00:18:52.780 12:02:42 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:18:52.780 ************************************ 00:18:52.780 END TEST bdev_verify 00:18:52.780 ************************************ 00:18:52.780 12:02:42 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:52.780 12:02:42 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:52.780 12:02:42 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:52.780 12:02:42 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.780 12:02:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:52.780 ************************************ 00:18:52.780 START TEST bdev_verify_big_io 00:18:52.780 ************************************ 00:18:52.780 12:02:42 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:53.040 [2024-11-27 12:02:42.898097] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:18:53.040 [2024-11-27 12:02:42.898241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74611 ] 00:18:53.040 [2024-11-27 12:02:43.088886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:53.299 [2024-11-27 12:02:43.201034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.299 [2024-11-27 12:02:43.201053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.868 Running I/O for 5 seconds... 
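
bdev_verify_big_io repeats the same bdevperf verify run with -o 65536, so each completion now moves 64 KiB and the MiB/s column, not IOPS, is the number to watch: throughput is simply IOPS times I/O size. Checking the first progress sample below:

# 2208 IOPS x 64 KiB per I/O = 138 MiB/s, matching the first sample reported.
awk 'BEGIN { printf "%.2f MiB/s\n", 2208 * 65536 / 1048576 }'
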
00:18:59.697 2208.00 IOPS, 138.00 MiB/s [2024-11-27T12:02:49.750Z] 4155.00 IOPS, 259.69 MiB/s [2024-11-27T12:02:50.009Z] 3934.33 IOPS, 245.90 MiB/s
00:18:59.956 Latency(us)
00:18:59.956 [2024-11-27T12:02:50.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:59.956 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:59.956 Verification LBA range: start 0x0 length 0x8000
00:18:59.956 nvme0n1 : 5.61 136.94 8.56 0.00 0.00 896884.57 6658.88 1549702.68
00:18:59.956 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:59.956 Verification LBA range: start 0x8000 length 0x8000
00:18:59.956 nvme0n1 : 5.39 230.14 14.38 0.00 0.00 536224.23 96856.42 528920.26
00:18:59.956 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:59.956 Verification LBA range: start 0x0 length 0x8000
00:18:59.956 nvme0n2 : 5.75 108.47 6.78 0.00 0.00 1084160.88 37900.34 1691197.28
00:18:59.956 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:59.956 Verification LBA range: start 0x8000 length 0x8000
00:18:59.956 nvme0n2 : 5.46 240.18 15.01 0.00 0.00 516064.87 69483.95 451435.13
00:18:59.956 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:59.956 Verification LBA range: start 0x0 length 0x8000
00:18:59.956 nvme0n3 : 5.61 111.19 6.95 0.00 0.00 1025281.73 81696.28 1792264.84
00:18:59.956 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:59.956 Verification LBA range: start 0x8000 length 0x8000
00:18:59.956 nvme0n3 : 5.47 242.61 15.16 0.00 0.00 504839.96 10054.12 727686.48
00:18:59.956 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:59.956 Verification LBA range: start 0x0 length 0x2000
00:18:59.956 nvme1n1 : 5.80 129.67 8.10 0.00 0.00 848251.89 37268.67 2075254.03
00:18:59.956 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:59.956 Verification LBA range: start 0x2000 length 0x2000
00:18:59.956 nvme1n1 : 5.47 237.02 14.81 0.00 0.00 507931.78 69483.95 448066.21
00:18:59.956 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:59.956 Verification LBA range: start 0x0 length 0xbd0b
00:18:59.956 nvme2n1 : 5.95 188.28 11.77 0.00 0.00 566877.59 3816.35 1266713.50
00:18:59.956 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:59.956 Verification LBA range: start 0xbd0b length 0xbd0b
00:18:59.956 nvme2n1 : 5.46 246.17 15.39 0.00 0.00 482431.64 9527.72 1206072.96
00:18:59.956 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:18:59.956 Verification LBA range: start 0x0 length 0xa000
00:18:59.956 nvme3n1 : 6.08 249.95 15.62 0.00 0.00 414303.21 470.46 2331291.86
00:18:59.956 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:18:59.956 Verification LBA range: start 0xa000 length 0xa000
00:18:59.956 nvme3n1 : 5.48 257.14 16.07 0.00 0.00 456319.88 3632.12 623249.99
00:18:59.956 [2024-11-27T12:02:50.009Z] ===================================================================================================================
00:18:59.956 [2024-11-27T12:02:50.009Z] Total : 2377.76 148.61 0.00 0.00 590086.08 470.46 2331291.86
00:19:01.332
00:19:01.332 real 0m8.453s
00:19:01.332 user 0m15.349s
00:19:01.332 sys 0m0.566s
00:19:01.332 ************************************
00:19:01.332 END TEST bdev_verify_big_io
00:19:01.332 ************************************
00:19:01.332
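The bdev.json consumed by all of these bdevperf runs is generated earlier in the job and never printed in this excerpt; only its path appears. As a sketch of the file format only: an SPDK --json config is a top-level object whose "subsystems" array carries per-subsystem lists of method/params pairs, the same shape as the save_config dump that appears later in this log. A hypothetical minimal stand-in (the real file defines the six bdevs exercised above, nvme0n1 through nvme3n1):

# Hypothetical stand-in for bdev.json, illustrative only; not the file the
# job actually used. The bdev_malloc_create method and its params are taken
# from the save_config dump later in this log.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 8192, "block_size": 4096 }
        }
      ]
    }
  ]
}
EOF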
12:02:51 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:01.332 12:02:51 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
12:02:51 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
12:02:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
12:02:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
12:02:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST bdev_write_zeroes
************************************
12:02:51 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:01.591 [2024-11-27 12:02:51.425638] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
[2024-11-27 12:02:51.425778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74721 ]
[2024-11-27 12:02:51.604211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-27 12:02:51.716828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:02.419 Running I/O for 1 seconds...
00:19:03.358 41952.00 IOPS, 163.88 MiB/s
00:19:03.358 Latency(us)
00:19:03.358 [2024-11-27T12:02:53.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:03.358 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:03.358 nvme0n1 : 1.03 6308.68 24.64 0.00 0.00 20272.01 8474.94 33268.07
00:19:03.358 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:03.358 nvme0n2 : 1.04 6301.67 24.62 0.00 0.00 20282.51 8632.85 36005.32
00:19:03.358 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:03.358 nvme0n3 : 1.04 6294.47 24.59 0.00 0.00 20292.50 8632.85 38532.01
00:19:03.358 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:03.358 nvme1n1 : 1.04 6287.44 24.56 0.00 0.00 20302.62 8580.22 40005.91
00:19:03.358 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:03.358 nvme2n1 : 1.03 10260.29 40.08 0.00 0.00 12430.04 4632.26 32004.73
00:19:03.358 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:03.358 nvme3n1 : 1.03 6315.88 24.67 0.00 0.00 20096.36 3737.39 31162.50
00:19:03.358 [2024-11-27T12:02:53.411Z] ===================================================================================================================
00:19:03.358 [2024-11-27T12:02:53.411Z] Total : 41768.44 163.16 0.00 0.00 18333.48 3737.39 40005.91
00:19:04.298
00:19:04.298 real 0m2.973s
00:19:04.298 user 0m2.245s
00:19:04.298 sys 0m0.543s
12:02:54 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:04.298 ************************************
00:19:04.298 END TEST bdev_write_zeroes
************************************ 00:19:04.298 12:02:54 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:04.558 12:02:54 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:04.558 12:02:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:04.558 12:02:54 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.558 12:02:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:04.558 ************************************ 00:19:04.558 START TEST bdev_json_nonenclosed 00:19:04.558 ************************************ 00:19:04.558 12:02:54 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:04.558 [2024-11-27 12:02:54.468282] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:19:04.558 [2024-11-27 12:02:54.468408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74780 ] 00:19:04.816 [2024-11-27 12:02:54.644106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.816 [2024-11-27 12:02:54.751803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.816 [2024-11-27 12:02:54.751896] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:04.816 [2024-11-27 12:02:54.751917] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:04.816 [2024-11-27 12:02:54.751929] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:05.075 00:19:05.075 real 0m0.620s 00:19:05.075 user 0m0.378s 00:19:05.075 sys 0m0.137s 00:19:05.075 12:02:54 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.075 ************************************ 00:19:05.075 END TEST bdev_json_nonenclosed 00:19:05.075 ************************************ 00:19:05.075 12:02:54 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:05.075 12:02:55 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:05.075 12:02:55 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:05.075 12:02:55 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.075 12:02:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:05.075 ************************************ 00:19:05.075 START TEST bdev_json_nonarray 00:19:05.075 ************************************ 00:19:05.075 12:02:55 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:05.334 [2024-11-27 12:02:55.160580] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
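The bdev_json_nonenclosed test that just finished, and the bdev_json_nonarray test starting here, each feed bdevperf a deliberately malformed config and drive json_config's error path; the loader logs the error and the app shuts down on a non-zero code, as the *ERROR* and *WARNING* lines show. The fixture contents themselves are not printed in this log; hypothetical one-liners that would trip the same two errors:

# Hypothetical reproductions of the malformed inputs (the real fixtures,
# test/bdev/nonenclosed.json and test/bdev/nonarray.json, are not shown
# in this log):
# "not enclosed in {}" -- top level is not a JSON object:
echo '"subsystems": []' > /tmp/nonenclosed.json
# "'subsystems' should be an array" -- wrong type for the key:
echo '{"subsystems": {}}' > /tmp/nonarray.json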
00:19:05.334 [2024-11-27 12:02:55.160689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74811 ] 00:19:05.334 [2024-11-27 12:02:55.340169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.593 [2024-11-27 12:02:55.447990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.593 [2024-11-27 12:02:55.448097] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:19:05.593 [2024-11-27 12:02:55.448118] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:05.593 [2024-11-27 12:02:55.448131] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:05.852 ************************************ 00:19:05.852 END TEST bdev_json_nonarray 00:19:05.852 ************************************ 00:19:05.852 00:19:05.852 real 0m0.614s 00:19:05.852 user 0m0.364s 00:19:05.852 sys 0m0.145s 00:19:05.852 12:02:55 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.852 12:02:55 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:05.852 12:02:55 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:19:05.852 12:02:55 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:19:05.852 12:02:55 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:19:05.852 12:02:55 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:19:05.852 12:02:55 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:19:05.852 12:02:55 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:05.852 12:02:55 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:05.852 12:02:55 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:19:05.852 12:02:55 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:19:05.852 12:02:55 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:19:05.852 12:02:55 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:19:05.852 12:02:55 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:06.791 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:09.328 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:09.328 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:19:09.328 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:09.588 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:19:09.588 00:19:09.588 real 0m58.356s 00:19:09.588 user 1m33.470s 00:19:09.588 sys 0m35.163s 00:19:09.588 12:02:59 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.588 12:02:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:09.588 ************************************ 00:19:09.588 END TEST blockdev_xnvme 00:19:09.588 ************************************ 00:19:09.588 12:02:59 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:19:09.588 12:02:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:09.588 12:02:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.588 12:02:59 -- 
common/autotest_common.sh@10 -- # set +x 00:19:09.588 ************************************ 00:19:09.588 START TEST ublk 00:19:09.588 ************************************ 00:19:09.588 12:02:59 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:19:09.848 * Looking for test storage... 00:19:09.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:19:09.848 12:02:59 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:09.848 12:02:59 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:19:09.848 12:02:59 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:09.848 12:02:59 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:09.848 12:02:59 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:09.848 12:02:59 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:09.848 12:02:59 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:09.848 12:02:59 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:19:09.848 12:02:59 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:19:09.848 12:02:59 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:19:09.848 12:02:59 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:19:09.848 12:02:59 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:19:09.848 12:02:59 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:19:09.848 12:02:59 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:19:09.848 12:02:59 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:09.848 12:02:59 ublk -- scripts/common.sh@344 -- # case "$op" in 00:19:09.848 12:02:59 ublk -- scripts/common.sh@345 -- # : 1 00:19:09.848 12:02:59 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:09.848 12:02:59 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:09.848 12:02:59 ublk -- scripts/common.sh@365 -- # decimal 1 00:19:09.848 12:02:59 ublk -- scripts/common.sh@353 -- # local d=1 00:19:09.848 12:02:59 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:09.848 12:02:59 ublk -- scripts/common.sh@355 -- # echo 1 00:19:09.848 12:02:59 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:19:09.848 12:02:59 ublk -- scripts/common.sh@366 -- # decimal 2 00:19:09.848 12:02:59 ublk -- scripts/common.sh@353 -- # local d=2 00:19:09.848 12:02:59 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:09.848 12:02:59 ublk -- scripts/common.sh@355 -- # echo 2 00:19:09.848 12:02:59 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:19:09.848 12:02:59 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:09.848 12:02:59 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:09.848 12:02:59 ublk -- scripts/common.sh@368 -- # return 0 00:19:09.848 12:02:59 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:09.848 12:02:59 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:09.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.848 --rc genhtml_branch_coverage=1 00:19:09.848 --rc genhtml_function_coverage=1 00:19:09.849 --rc genhtml_legend=1 00:19:09.849 --rc geninfo_all_blocks=1 00:19:09.849 --rc geninfo_unexecuted_blocks=1 00:19:09.849 00:19:09.849 ' 00:19:09.849 12:02:59 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:09.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.849 --rc genhtml_branch_coverage=1 00:19:09.849 --rc genhtml_function_coverage=1 00:19:09.849 --rc genhtml_legend=1 00:19:09.849 --rc geninfo_all_blocks=1 00:19:09.849 --rc geninfo_unexecuted_blocks=1 00:19:09.849 00:19:09.849 ' 00:19:09.849 12:02:59 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:09.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.849 --rc genhtml_branch_coverage=1 00:19:09.849 --rc genhtml_function_coverage=1 00:19:09.849 --rc genhtml_legend=1 00:19:09.849 --rc geninfo_all_blocks=1 00:19:09.849 --rc geninfo_unexecuted_blocks=1 00:19:09.849 00:19:09.849 ' 00:19:09.849 12:02:59 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:09.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.849 --rc genhtml_branch_coverage=1 00:19:09.849 --rc genhtml_function_coverage=1 00:19:09.849 --rc genhtml_legend=1 00:19:09.849 --rc geninfo_all_blocks=1 00:19:09.849 --rc geninfo_unexecuted_blocks=1 00:19:09.849 00:19:09.849 ' 00:19:09.849 12:02:59 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:19:09.849 12:02:59 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:19:09.849 12:02:59 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:19:09.849 12:02:59 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:19:09.849 12:02:59 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:19:09.849 12:02:59 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:19:09.849 12:02:59 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:19:09.849 12:02:59 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:19:09.849 12:02:59 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:19:09.849 12:02:59 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:19:09.849 12:02:59 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:19:09.849 12:02:59 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:19:09.849 12:02:59 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:19:09.849 12:02:59 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:19:09.849 12:02:59 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:19:09.849 12:02:59 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:19:09.849 12:02:59 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:19:09.849 12:02:59 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:19:09.849 12:02:59 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:19:09.849 12:02:59 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:19:09.849 12:02:59 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:09.849 12:02:59 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.849 12:02:59 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:09.849 ************************************ 00:19:09.849 START TEST test_save_ublk_config 00:19:09.849 ************************************ 00:19:09.849 12:02:59 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:19:09.849 12:02:59 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:19:09.849 12:02:59 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75101 00:19:09.849 12:02:59 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:19:09.849 12:02:59 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:19:09.849 12:02:59 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75101 00:19:09.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.849 12:02:59 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75101 ']' 00:19:09.849 12:02:59 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.849 12:02:59 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:09.849 12:02:59 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.849 12:02:59 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:09.849 12:02:59 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:10.110 [2024-11-27 12:02:59.985085] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
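The spdk_tgt that just started is the first half of a save/restore round trip: the test builds a ublk disk over RPC, dumps the live configuration with save_config, then proves a second target boots identically from that dump alone. Reduced to plain rpc.py calls, the flow looks roughly like this (a sketch; the binary and RPC names are taken from this log, while the malloc size, queue parameters, and socket-wait loop are illustrative simplifications of the harness helpers):

# Sketch of the save/restore round trip test_save_ublk_config performs.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/build/bin/spdk_tgt" -L ublk & tgt_pid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done   # crude waitforlisten
"$SPDK_DIR/scripts/rpc.py" ublk_create_target
"$SPDK_DIR/scripts/rpc.py" bdev_malloc_create -b malloc0 32 4096
"$SPDK_DIR/scripts/rpc.py" ublk_start_disk malloc0 0 -q 1 -d 128
"$SPDK_DIR/scripts/rpc.py" save_config > /tmp/ublk_config.json
kill "$tgt_pid"; wait "$tgt_pid"
# Boot a second target purely from the saved state; /dev/ublkb0 must
# reappear without any further RPCs:
"$SPDK_DIR/build/bin/spdk_tgt" -L ublk -c /tmp/ublk_config.json &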
00:19:10.110 [2024-11-27 12:02:59.985211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75101 ] 00:19:10.370 [2024-11-27 12:03:00.174270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.370 [2024-11-27 12:03:00.318359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.309 12:03:01 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.309 12:03:01 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:19:11.309 12:03:01 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:19:11.309 12:03:01 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:19:11.309 12:03:01 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.309 12:03:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:11.309 [2024-11-27 12:03:01.344396] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:11.309 [2024-11-27 12:03:01.345665] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:11.569 malloc0 00:19:11.569 [2024-11-27 12:03:01.439530] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:19:11.569 [2024-11-27 12:03:01.439633] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:19:11.569 [2024-11-27 12:03:01.439647] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:11.569 [2024-11-27 12:03:01.439657] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:11.569 [2024-11-27 12:03:01.448532] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:11.569 [2024-11-27 12:03:01.448560] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:11.569 [2024-11-27 12:03:01.455416] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:11.569 [2024-11-27 12:03:01.455532] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:11.569 [2024-11-27 12:03:01.472405] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:11.569 0 00:19:11.569 12:03:01 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.569 12:03:01 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:19:11.569 12:03:01 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.569 12:03:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:11.829 12:03:01 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.829 12:03:01 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:19:11.829 "subsystems": [ 00:19:11.829 { 00:19:11.829 "subsystem": "fsdev", 00:19:11.829 "config": [ 00:19:11.829 { 00:19:11.829 "method": "fsdev_set_opts", 00:19:11.829 "params": { 00:19:11.829 "fsdev_io_pool_size": 65535, 00:19:11.829 "fsdev_io_cache_size": 256 00:19:11.829 } 00:19:11.829 } 00:19:11.829 ] 00:19:11.829 }, 00:19:11.829 { 00:19:11.829 "subsystem": "keyring", 00:19:11.829 "config": [] 00:19:11.829 }, 00:19:11.829 { 00:19:11.829 "subsystem": "iobuf", 00:19:11.829 "config": [ 00:19:11.829 { 
00:19:11.829 "method": "iobuf_set_options", 00:19:11.829 "params": { 00:19:11.829 "small_pool_count": 8192, 00:19:11.829 "large_pool_count": 1024, 00:19:11.829 "small_bufsize": 8192, 00:19:11.829 "large_bufsize": 135168, 00:19:11.829 "enable_numa": false 00:19:11.829 } 00:19:11.829 } 00:19:11.829 ] 00:19:11.829 }, 00:19:11.829 { 00:19:11.829 "subsystem": "sock", 00:19:11.829 "config": [ 00:19:11.829 { 00:19:11.829 "method": "sock_set_default_impl", 00:19:11.829 "params": { 00:19:11.829 "impl_name": "posix" 00:19:11.829 } 00:19:11.829 }, 00:19:11.829 { 00:19:11.829 "method": "sock_impl_set_options", 00:19:11.829 "params": { 00:19:11.829 "impl_name": "ssl", 00:19:11.829 "recv_buf_size": 4096, 00:19:11.829 "send_buf_size": 4096, 00:19:11.829 "enable_recv_pipe": true, 00:19:11.829 "enable_quickack": false, 00:19:11.829 "enable_placement_id": 0, 00:19:11.829 "enable_zerocopy_send_server": true, 00:19:11.829 "enable_zerocopy_send_client": false, 00:19:11.829 "zerocopy_threshold": 0, 00:19:11.829 "tls_version": 0, 00:19:11.829 "enable_ktls": false 00:19:11.829 } 00:19:11.829 }, 00:19:11.829 { 00:19:11.829 "method": "sock_impl_set_options", 00:19:11.829 "params": { 00:19:11.829 "impl_name": "posix", 00:19:11.829 "recv_buf_size": 2097152, 00:19:11.829 "send_buf_size": 2097152, 00:19:11.829 "enable_recv_pipe": true, 00:19:11.829 "enable_quickack": false, 00:19:11.829 "enable_placement_id": 0, 00:19:11.830 "enable_zerocopy_send_server": true, 00:19:11.830 "enable_zerocopy_send_client": false, 00:19:11.830 "zerocopy_threshold": 0, 00:19:11.830 "tls_version": 0, 00:19:11.830 "enable_ktls": false 00:19:11.830 } 00:19:11.830 } 00:19:11.830 ] 00:19:11.830 }, 00:19:11.830 { 00:19:11.830 "subsystem": "vmd", 00:19:11.830 "config": [] 00:19:11.830 }, 00:19:11.830 { 00:19:11.830 "subsystem": "accel", 00:19:11.830 "config": [ 00:19:11.830 { 00:19:11.830 "method": "accel_set_options", 00:19:11.830 "params": { 00:19:11.830 "small_cache_size": 128, 00:19:11.830 "large_cache_size": 16, 00:19:11.830 "task_count": 2048, 00:19:11.830 "sequence_count": 2048, 00:19:11.830 "buf_count": 2048 00:19:11.830 } 00:19:11.830 } 00:19:11.830 ] 00:19:11.830 }, 00:19:11.830 { 00:19:11.830 "subsystem": "bdev", 00:19:11.830 "config": [ 00:19:11.830 { 00:19:11.830 "method": "bdev_set_options", 00:19:11.830 "params": { 00:19:11.830 "bdev_io_pool_size": 65535, 00:19:11.830 "bdev_io_cache_size": 256, 00:19:11.830 "bdev_auto_examine": true, 00:19:11.830 "iobuf_small_cache_size": 128, 00:19:11.830 "iobuf_large_cache_size": 16 00:19:11.830 } 00:19:11.830 }, 00:19:11.830 { 00:19:11.830 "method": "bdev_raid_set_options", 00:19:11.830 "params": { 00:19:11.830 "process_window_size_kb": 1024, 00:19:11.830 "process_max_bandwidth_mb_sec": 0 00:19:11.830 } 00:19:11.830 }, 00:19:11.830 { 00:19:11.830 "method": "bdev_iscsi_set_options", 00:19:11.830 "params": { 00:19:11.830 "timeout_sec": 30 00:19:11.830 } 00:19:11.830 }, 00:19:11.830 { 00:19:11.830 "method": "bdev_nvme_set_options", 00:19:11.830 "params": { 00:19:11.830 "action_on_timeout": "none", 00:19:11.830 "timeout_us": 0, 00:19:11.830 "timeout_admin_us": 0, 00:19:11.830 "keep_alive_timeout_ms": 10000, 00:19:11.830 "arbitration_burst": 0, 00:19:11.830 "low_priority_weight": 0, 00:19:11.830 "medium_priority_weight": 0, 00:19:11.830 "high_priority_weight": 0, 00:19:11.830 "nvme_adminq_poll_period_us": 10000, 00:19:11.830 "nvme_ioq_poll_period_us": 0, 00:19:11.830 "io_queue_requests": 0, 00:19:11.830 "delay_cmd_submit": true, 00:19:11.830 "transport_retry_count": 4, 00:19:11.830 
"bdev_retry_count": 3, 00:19:11.830 "transport_ack_timeout": 0, 00:19:11.830 "ctrlr_loss_timeout_sec": 0, 00:19:11.830 "reconnect_delay_sec": 0, 00:19:11.830 "fast_io_fail_timeout_sec": 0, 00:19:11.830 "disable_auto_failback": false, 00:19:11.830 "generate_uuids": false, 00:19:11.830 "transport_tos": 0, 00:19:11.830 "nvme_error_stat": false, 00:19:11.830 "rdma_srq_size": 0, 00:19:11.830 "io_path_stat": false, 00:19:11.830 "allow_accel_sequence": false, 00:19:11.830 "rdma_max_cq_size": 0, 00:19:11.830 "rdma_cm_event_timeout_ms": 0, 00:19:11.830 "dhchap_digests": [ 00:19:11.830 "sha256", 00:19:11.830 "sha384", 00:19:11.830 "sha512" 00:19:11.830 ], 00:19:11.830 "dhchap_dhgroups": [ 00:19:11.830 "null", 00:19:11.830 "ffdhe2048", 00:19:11.830 "ffdhe3072", 00:19:11.830 "ffdhe4096", 00:19:11.830 "ffdhe6144", 00:19:11.830 "ffdhe8192" 00:19:11.830 ] 00:19:11.830 } 00:19:11.830 }, 00:19:11.830 { 00:19:11.830 "method": "bdev_nvme_set_hotplug", 00:19:11.830 "params": { 00:19:11.830 "period_us": 100000, 00:19:11.830 "enable": false 00:19:11.830 } 00:19:11.830 }, 00:19:11.830 { 00:19:11.830 "method": "bdev_malloc_create", 00:19:11.830 "params": { 00:19:11.830 "name": "malloc0", 00:19:11.830 "num_blocks": 8192, 00:19:11.830 "block_size": 4096, 00:19:11.830 "physical_block_size": 4096, 00:19:11.830 "uuid": "4b17fecd-ed6b-42bb-ae33-bc5f5e5916a8", 00:19:11.830 "optimal_io_boundary": 0, 00:19:11.830 "md_size": 0, 00:19:11.830 "dif_type": 0, 00:19:11.830 "dif_is_head_of_md": false, 00:19:11.830 "dif_pi_format": 0 00:19:11.830 } 00:19:11.830 }, 00:19:11.830 { 00:19:11.830 "method": "bdev_wait_for_examine" 00:19:11.830 } 00:19:11.830 ] 00:19:11.830 }, 00:19:11.830 { 00:19:11.830 "subsystem": "scsi", 00:19:11.830 "config": null 00:19:11.830 }, 00:19:11.830 { 00:19:11.830 "subsystem": "scheduler", 00:19:11.830 "config": [ 00:19:11.830 { 00:19:11.830 "method": "framework_set_scheduler", 00:19:11.830 "params": { 00:19:11.830 "name": "static" 00:19:11.830 } 00:19:11.830 } 00:19:11.830 ] 00:19:11.830 }, 00:19:11.830 { 00:19:11.830 "subsystem": "vhost_scsi", 00:19:11.830 "config": [] 00:19:11.830 }, 00:19:11.830 { 00:19:11.830 "subsystem": "vhost_blk", 00:19:11.830 "config": [] 00:19:11.830 }, 00:19:11.830 { 00:19:11.830 "subsystem": "ublk", 00:19:11.830 "config": [ 00:19:11.830 { 00:19:11.830 "method": "ublk_create_target", 00:19:11.830 "params": { 00:19:11.830 "cpumask": "1" 00:19:11.830 } 00:19:11.830 }, 00:19:11.830 { 00:19:11.830 "method": "ublk_start_disk", 00:19:11.830 "params": { 00:19:11.830 "bdev_name": "malloc0", 00:19:11.830 "ublk_id": 0, 00:19:11.830 "num_queues": 1, 00:19:11.830 "queue_depth": 128 00:19:11.830 } 00:19:11.830 } 00:19:11.830 ] 00:19:11.830 }, 00:19:11.830 { 00:19:11.830 "subsystem": "nbd", 00:19:11.830 "config": [] 00:19:11.830 }, 00:19:11.830 { 00:19:11.830 "subsystem": "nvmf", 00:19:11.830 "config": [ 00:19:11.830 { 00:19:11.830 "method": "nvmf_set_config", 00:19:11.830 "params": { 00:19:11.830 "discovery_filter": "match_any", 00:19:11.830 "admin_cmd_passthru": { 00:19:11.830 "identify_ctrlr": false 00:19:11.830 }, 00:19:11.830 "dhchap_digests": [ 00:19:11.830 "sha256", 00:19:11.830 "sha384", 00:19:11.830 "sha512" 00:19:11.830 ], 00:19:11.830 "dhchap_dhgroups": [ 00:19:11.830 "null", 00:19:11.830 "ffdhe2048", 00:19:11.830 "ffdhe3072", 00:19:11.830 "ffdhe4096", 00:19:11.830 "ffdhe6144", 00:19:11.831 "ffdhe8192" 00:19:11.831 ] 00:19:11.831 } 00:19:11.831 }, 00:19:11.831 { 00:19:11.831 "method": "nvmf_set_max_subsystems", 00:19:11.831 "params": { 00:19:11.831 "max_subsystems": 1024 
00:19:11.831 } 00:19:11.831 }, 00:19:11.831 { 00:19:11.831 "method": "nvmf_set_crdt", 00:19:11.831 "params": { 00:19:11.831 "crdt1": 0, 00:19:11.831 "crdt2": 0, 00:19:11.831 "crdt3": 0 00:19:11.831 } 00:19:11.831 } 00:19:11.831 ] 00:19:11.831 }, 00:19:11.831 { 00:19:11.831 "subsystem": "iscsi", 00:19:11.831 "config": [ 00:19:11.831 { 00:19:11.831 "method": "iscsi_set_options", 00:19:11.831 "params": { 00:19:11.831 "node_base": "iqn.2016-06.io.spdk", 00:19:11.831 "max_sessions": 128, 00:19:11.831 "max_connections_per_session": 2, 00:19:11.831 "max_queue_depth": 64, 00:19:11.831 "default_time2wait": 2, 00:19:11.831 "default_time2retain": 20, 00:19:11.831 "first_burst_length": 8192, 00:19:11.831 "immediate_data": true, 00:19:11.831 "allow_duplicated_isid": false, 00:19:11.831 "error_recovery_level": 0, 00:19:11.831 "nop_timeout": 60, 00:19:11.831 "nop_in_interval": 30, 00:19:11.831 "disable_chap": false, 00:19:11.831 "require_chap": false, 00:19:11.831 "mutual_chap": false, 00:19:11.831 "chap_group": 0, 00:19:11.831 "max_large_datain_per_connection": 64, 00:19:11.831 "max_r2t_per_connection": 4, 00:19:11.831 "pdu_pool_size": 36864, 00:19:11.831 "immediate_data_pool_size": 16384, 00:19:11.831 "data_out_pool_size": 2048 00:19:11.831 } 00:19:11.831 } 00:19:11.831 ] 00:19:11.831 } 00:19:11.831 ] 00:19:11.831 }' 00:19:11.831 12:03:01 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75101 00:19:11.831 12:03:01 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75101 ']' 00:19:11.831 12:03:01 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75101 00:19:11.831 12:03:01 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:19:11.831 12:03:01 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.831 12:03:01 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75101 00:19:11.831 12:03:01 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.831 12:03:01 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.831 12:03:01 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75101' 00:19:11.831 killing process with pid 75101 00:19:11.831 12:03:01 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75101 00:19:11.831 12:03:01 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75101 00:19:13.739 [2024-11-27 12:03:03.342669] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:13.739 [2024-11-27 12:03:03.370489] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:13.739 [2024-11-27 12:03:03.370637] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:13.739 [2024-11-27 12:03:03.378416] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:13.739 [2024-11-27 12:03:03.378500] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:13.739 [2024-11-27 12:03:03.378522] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:13.739 [2024-11-27 12:03:03.378553] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:13.739 [2024-11-27 12:03:03.378717] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:15.647 12:03:05 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75176 00:19:15.647 12:03:05 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 75176 00:19:15.647 12:03:05 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75176 ']' 00:19:15.647 12:03:05 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.647 12:03:05 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.647 12:03:05 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.647 12:03:05 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.647 12:03:05 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:19:15.647 12:03:05 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:15.647 12:03:05 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:19:15.647 "subsystems": [ 00:19:15.647 { 00:19:15.647 "subsystem": "fsdev", 00:19:15.647 "config": [ 00:19:15.647 { 00:19:15.647 "method": "fsdev_set_opts", 00:19:15.647 "params": { 00:19:15.647 "fsdev_io_pool_size": 65535, 00:19:15.647 "fsdev_io_cache_size": 256 00:19:15.647 } 00:19:15.647 } 00:19:15.647 ] 00:19:15.647 }, 00:19:15.647 { 00:19:15.647 "subsystem": "keyring", 00:19:15.647 "config": [] 00:19:15.647 }, 00:19:15.647 { 00:19:15.648 "subsystem": "iobuf", 00:19:15.648 "config": [ 00:19:15.648 { 00:19:15.648 "method": "iobuf_set_options", 00:19:15.648 "params": { 00:19:15.648 "small_pool_count": 8192, 00:19:15.648 "large_pool_count": 1024, 00:19:15.648 "small_bufsize": 8192, 00:19:15.648 "large_bufsize": 135168, 00:19:15.648 "enable_numa": false 00:19:15.648 } 00:19:15.648 } 00:19:15.648 ] 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "subsystem": "sock", 00:19:15.648 "config": [ 00:19:15.648 { 00:19:15.648 "method": "sock_set_default_impl", 00:19:15.648 "params": { 00:19:15.648 "impl_name": "posix" 00:19:15.648 } 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "method": "sock_impl_set_options", 00:19:15.648 "params": { 00:19:15.648 "impl_name": "ssl", 00:19:15.648 "recv_buf_size": 4096, 00:19:15.648 "send_buf_size": 4096, 00:19:15.648 "enable_recv_pipe": true, 00:19:15.648 "enable_quickack": false, 00:19:15.648 "enable_placement_id": 0, 00:19:15.648 "enable_zerocopy_send_server": true, 00:19:15.648 "enable_zerocopy_send_client": false, 00:19:15.648 "zerocopy_threshold": 0, 00:19:15.648 "tls_version": 0, 00:19:15.648 "enable_ktls": false 00:19:15.648 } 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "method": "sock_impl_set_options", 00:19:15.648 "params": { 00:19:15.648 "impl_name": "posix", 00:19:15.648 "recv_buf_size": 2097152, 00:19:15.648 "send_buf_size": 2097152, 00:19:15.648 "enable_recv_pipe": true, 00:19:15.648 "enable_quickack": false, 00:19:15.648 "enable_placement_id": 0, 00:19:15.648 "enable_zerocopy_send_server": true, 00:19:15.648 "enable_zerocopy_send_client": false, 00:19:15.648 "zerocopy_threshold": 0, 00:19:15.648 "tls_version": 0, 00:19:15.648 "enable_ktls": false 00:19:15.648 } 00:19:15.648 } 00:19:15.648 ] 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "subsystem": "vmd", 00:19:15.648 "config": [] 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "subsystem": "accel", 00:19:15.648 "config": [ 00:19:15.648 { 00:19:15.648 "method": "accel_set_options", 00:19:15.648 "params": { 00:19:15.648 "small_cache_size": 128, 
00:19:15.648 "large_cache_size": 16, 00:19:15.648 "task_count": 2048, 00:19:15.648 "sequence_count": 2048, 00:19:15.648 "buf_count": 2048 00:19:15.648 } 00:19:15.648 } 00:19:15.648 ] 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "subsystem": "bdev", 00:19:15.648 "config": [ 00:19:15.648 { 00:19:15.648 "method": "bdev_set_options", 00:19:15.648 "params": { 00:19:15.648 "bdev_io_pool_size": 65535, 00:19:15.648 "bdev_io_cache_size": 256, 00:19:15.648 "bdev_auto_examine": true, 00:19:15.648 "iobuf_small_cache_size": 128, 00:19:15.648 "iobuf_large_cache_size": 16 00:19:15.648 } 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "method": "bdev_raid_set_options", 00:19:15.648 "params": { 00:19:15.648 "process_window_size_kb": 1024, 00:19:15.648 "process_max_bandwidth_mb_sec": 0 00:19:15.648 } 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "method": "bdev_iscsi_set_options", 00:19:15.648 "params": { 00:19:15.648 "timeout_sec": 30 00:19:15.648 } 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "method": "bdev_nvme_set_options", 00:19:15.648 "params": { 00:19:15.648 "action_on_timeout": "none", 00:19:15.648 "timeout_us": 0, 00:19:15.648 "timeout_admin_us": 0, 00:19:15.648 "keep_alive_timeout_ms": 10000, 00:19:15.648 "arbitration_burst": 0, 00:19:15.648 "low_priority_weight": 0, 00:19:15.648 "medium_priority_weight": 0, 00:19:15.648 "high_priority_weight": 0, 00:19:15.648 "nvme_adminq_poll_period_us": 10000, 00:19:15.648 "nvme_ioq_poll_period_us": 0, 00:19:15.648 "io_queue_requests": 0, 00:19:15.648 "delay_cmd_submit": true, 00:19:15.648 "transport_retry_count": 4, 00:19:15.648 "bdev_retry_count": 3, 00:19:15.648 "transport_ack_timeout": 0, 00:19:15.648 "ctrlr_loss_timeout_sec": 0, 00:19:15.648 "reconnect_delay_sec": 0, 00:19:15.648 "fast_io_fail_timeout_sec": 0, 00:19:15.648 "disable_auto_failback": false, 00:19:15.648 "generate_uuids": false, 00:19:15.648 "transport_tos": 0, 00:19:15.648 "nvme_error_stat": false, 00:19:15.648 "rdma_srq_size": 0, 00:19:15.648 "io_path_stat": false, 00:19:15.648 "allow_accel_sequence": false, 00:19:15.648 "rdma_max_cq_size": 0, 00:19:15.648 "rdma_cm_event_timeout_ms": 0, 00:19:15.648 "dhchap_digests": [ 00:19:15.648 "sha256", 00:19:15.648 "sha384", 00:19:15.648 "sha512" 00:19:15.648 ], 00:19:15.648 "dhchap_dhgroups": [ 00:19:15.648 "null", 00:19:15.648 "ffdhe2048", 00:19:15.648 "ffdhe3072", 00:19:15.648 "ffdhe4096", 00:19:15.648 "ffdhe6144", 00:19:15.648 "ffdhe8192" 00:19:15.648 ] 00:19:15.648 } 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "method": "bdev_nvme_set_hotplug", 00:19:15.648 "params": { 00:19:15.648 "period_us": 100000, 00:19:15.648 "enable": false 00:19:15.648 } 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "method": "bdev_malloc_create", 00:19:15.648 "params": { 00:19:15.648 "name": "malloc0", 00:19:15.648 "num_blocks": 8192, 00:19:15.648 "block_size": 4096, 00:19:15.648 "physical_block_size": 4096, 00:19:15.648 "uuid": "4b17fecd-ed6b-42bb-ae33-bc5f5e5916a8", 00:19:15.648 "optimal_io_boundary": 0, 00:19:15.648 "md_size": 0, 00:19:15.648 "dif_type": 0, 00:19:15.648 "dif_is_head_of_md": false, 00:19:15.648 "dif_pi_format": 0 00:19:15.648 } 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "method": "bdev_wait_for_examine" 00:19:15.648 } 00:19:15.648 ] 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "subsystem": "scsi", 00:19:15.648 "config": null 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "subsystem": "scheduler", 00:19:15.648 "config": [ 00:19:15.648 { 00:19:15.648 "method": "framework_set_scheduler", 00:19:15.648 "params": { 00:19:15.648 "name": "static" 00:19:15.648 } 
00:19:15.648 } 00:19:15.648 ] 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "subsystem": "vhost_scsi", 00:19:15.648 "config": [] 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "subsystem": "vhost_blk", 00:19:15.648 "config": [] 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "subsystem": "ublk", 00:19:15.648 "config": [ 00:19:15.648 { 00:19:15.648 "method": "ublk_create_target", 00:19:15.648 "params": { 00:19:15.648 "cpumask": "1" 00:19:15.648 } 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "method": "ublk_start_disk", 00:19:15.648 "params": { 00:19:15.648 "bdev_name": "malloc0", 00:19:15.648 "ublk_id": 0, 00:19:15.648 "num_queues": 1, 00:19:15.648 "queue_depth": 128 00:19:15.648 } 00:19:15.648 } 00:19:15.648 ] 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "subsystem": "nbd", 00:19:15.648 "config": [] 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "subsystem": "nvmf", 00:19:15.648 "config": [ 00:19:15.648 { 00:19:15.648 "method": "nvmf_set_config", 00:19:15.648 "params": { 00:19:15.648 "discovery_filter": "match_any", 00:19:15.648 "admin_cmd_passthru": { 00:19:15.648 "identify_ctrlr": false 00:19:15.648 }, 00:19:15.648 "dhchap_digests": [ 00:19:15.648 "sha256", 00:19:15.648 "sha384", 00:19:15.648 "sha512" 00:19:15.648 ], 00:19:15.648 "dhchap_dhgroups": [ 00:19:15.648 "null", 00:19:15.648 "ffdhe2048", 00:19:15.648 "ffdhe3072", 00:19:15.648 "ffdhe4096", 00:19:15.648 "ffdhe6144", 00:19:15.648 "ffdhe8192" 00:19:15.648 ] 00:19:15.648 } 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "method": "nvmf_set_max_subsystems", 00:19:15.648 "params": { 00:19:15.648 "max_subsystems": 1024 00:19:15.648 } 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "method": "nvmf_set_crdt", 00:19:15.648 "params": { 00:19:15.648 "crdt1": 0, 00:19:15.648 "crdt2": 0, 00:19:15.648 "crdt3": 0 00:19:15.648 } 00:19:15.648 } 00:19:15.648 ] 00:19:15.648 }, 00:19:15.648 { 00:19:15.648 "subsystem": "iscsi", 00:19:15.648 "config": [ 00:19:15.648 { 00:19:15.648 "method": "iscsi_set_options", 00:19:15.648 "params": { 00:19:15.648 "node_base": "iqn.2016-06.io.spdk", 00:19:15.648 "max_sessions": 128, 00:19:15.648 "max_connections_per_session": 2, 00:19:15.649 "max_queue_depth": 64, 00:19:15.649 "default_time2wait": 2, 00:19:15.649 "default_time2retain": 20, 00:19:15.649 "first_burst_length": 8192, 00:19:15.649 "immediate_data": true, 00:19:15.649 "allow_duplicated_isid": false, 00:19:15.649 "error_recovery_level": 0, 00:19:15.649 "nop_timeout": 60, 00:19:15.649 "nop_in_interval": 30, 00:19:15.649 "disable_chap": false, 00:19:15.649 "require_chap": false, 00:19:15.649 "mutual_chap": false, 00:19:15.649 "chap_group": 0, 00:19:15.649 "max_large_datain_per_connection": 64, 00:19:15.649 "max_r2t_per_connection": 4, 00:19:15.649 "pdu_pool_size": 36864, 00:19:15.649 "immediate_data_pool_size": 16384, 00:19:15.649 "data_out_pool_size": 2048 00:19:15.649 } 00:19:15.649 } 00:19:15.649 ] 00:19:15.649 } 00:19:15.649 ] 00:19:15.649 }' 00:19:15.649 [2024-11-27 12:03:05.448319] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:19:15.649 [2024-11-27 12:03:05.448457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75176 ] 00:19:15.649 [2024-11-27 12:03:05.634533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.909 [2024-11-27 12:03:05.768606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.289 [2024-11-27 12:03:06.953376] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:17.289 [2024-11-27 12:03:06.954621] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:17.289 [2024-11-27 12:03:06.961527] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:19:17.289 [2024-11-27 12:03:06.961738] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:19:17.289 [2024-11-27 12:03:06.961785] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:17.289 [2024-11-27 12:03:06.961859] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:17.289 [2024-11-27 12:03:06.970514] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:17.289 [2024-11-27 12:03:06.970581] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:17.289 [2024-11-27 12:03:06.977392] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:17.289 [2024-11-27 12:03:06.977492] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:17.289 [2024-11-27 12:03:06.994389] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:17.289 12:03:07 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.289 12:03:07 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:19:17.289 12:03:07 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:19:17.289 12:03:07 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.289 12:03:07 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:19:17.290 12:03:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:17.290 12:03:07 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.290 12:03:07 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:17.290 12:03:07 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:19:17.290 12:03:07 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75176 00:19:17.290 12:03:07 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75176 ']' 00:19:17.290 12:03:07 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75176 00:19:17.290 12:03:07 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:19:17.290 12:03:07 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.290 12:03:07 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75176 00:19:17.290 killing process with pid 75176 00:19:17.290 12:03:07 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.290 
12:03:07 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.290 12:03:07 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75176' 00:19:17.290 12:03:07 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75176 00:19:17.290 12:03:07 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75176 00:19:19.196 [2024-11-27 12:03:08.733115] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:19.196 [2024-11-27 12:03:08.771413] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:19.196 [2024-11-27 12:03:08.771538] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:19.196 [2024-11-27 12:03:08.779395] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:19.196 [2024-11-27 12:03:08.779453] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:19.196 [2024-11-27 12:03:08.779463] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:19.196 [2024-11-27 12:03:08.779491] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:19.196 [2024-11-27 12:03:08.779648] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:20.690 12:03:10 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:19:20.690 ************************************ 00:19:20.690 END TEST test_save_ublk_config 00:19:20.690 ************************************ 00:19:20.690 00:19:20.690 real 0m10.850s 00:19:20.690 user 0m8.033s 00:19:20.690 sys 0m3.523s 00:19:20.690 12:03:10 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.691 12:03:10 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:20.964 12:03:10 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75269 00:19:20.964 12:03:10 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:20.964 12:03:10 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:20.964 12:03:10 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75269 00:19:20.964 12:03:10 ublk -- common/autotest_common.sh@835 -- # '[' -z 75269 ']' 00:19:20.964 12:03:10 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.964 12:03:10 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.964 12:03:10 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.964 12:03:10 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.964 12:03:10 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:20.964 [2024-11-27 12:03:10.893259] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:19:20.964 [2024-11-27 12:03:10.893410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75269 ] 00:19:21.223 [2024-11-27 12:03:11.076551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:21.223 [2024-11-27 12:03:11.214238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.223 [2024-11-27 12:03:11.214267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.158 12:03:12 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.158 12:03:12 ublk -- common/autotest_common.sh@868 -- # return 0 00:19:22.158 12:03:12 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:19:22.158 12:03:12 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:22.158 12:03:12 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:22.158 12:03:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:22.158 ************************************ 00:19:22.158 START TEST test_create_ublk 00:19:22.158 ************************************ 00:19:22.158 12:03:12 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:19:22.159 12:03:12 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:19:22.159 12:03:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.159 12:03:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:22.159 [2024-11-27 12:03:12.064380] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:22.159 [2024-11-27 12:03:12.067282] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:22.159 12:03:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.159 12:03:12 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:19:22.159 12:03:12 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:19:22.159 12:03:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.159 12:03:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:22.418 12:03:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.418 12:03:12 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:19:22.418 12:03:12 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:19:22.418 12:03:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.418 12:03:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:22.418 [2024-11-27 12:03:12.361576] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:19:22.418 [2024-11-27 12:03:12.362048] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:19:22.418 [2024-11-27 12:03:12.362063] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:22.418 [2024-11-27 12:03:12.362073] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:22.418 [2024-11-27 12:03:12.369759] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:22.418 [2024-11-27 12:03:12.369785] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:22.418 
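The UBLK_CMD_ADD_DEV/SET_PARAMS exchange being traced here is the kernel side of a three-RPC bring-up. As a sketch, the same sequence issued directly with SPDK's stock rpc.py client, with arguments copied verbatim from the xtrace above (only the SPDK_DIR path is an assumed convenience):

# The ublk bring-up just traced, as direct RPC calls:
SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/scripts/rpc.py" ublk_create_target
"$SPDK_DIR/scripts/rpc.py" bdev_malloc_create 128 4096   # 128 MiB, 4 KiB blocks; prints "Malloc0"
"$SPDK_DIR/scripts/rpc.py" ublk_start_disk Malloc0 0 -q 4 -d 512
# On success the kernel exposes /dev/ublkb0, matching the ublk_get_disks
# output checked below.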
[2024-11-27 12:03:12.377405] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:22.418 [2024-11-27 12:03:12.378027] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:22.418 [2024-11-27 12:03:12.393432] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:22.418 12:03:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.418 12:03:12 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:19:22.418 12:03:12 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:19:22.418 12:03:12 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:19:22.418 12:03:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.418 12:03:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:22.418 12:03:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.418 12:03:12 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:19:22.418 { 00:19:22.418 "ublk_device": "/dev/ublkb0", 00:19:22.418 "id": 0, 00:19:22.418 "queue_depth": 512, 00:19:22.418 "num_queues": 4, 00:19:22.418 "bdev_name": "Malloc0" 00:19:22.418 } 00:19:22.418 ]' 00:19:22.418 12:03:12 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:19:22.677 12:03:12 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:22.677 12:03:12 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:19:22.677 12:03:12 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:19:22.677 12:03:12 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:19:22.677 12:03:12 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:19:22.677 12:03:12 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:19:22.677 12:03:12 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:19:22.677 12:03:12 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:19:22.677 12:03:12 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:19:22.677 12:03:12 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:19:22.677 12:03:12 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:19:22.677 12:03:12 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:19:22.677 12:03:12 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:19:22.677 12:03:12 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:19:22.677 12:03:12 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:19:22.677 12:03:12 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:19:22.677 12:03:12 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:19:22.677 12:03:12 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:19:22.677 12:03:12 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:19:22.677 12:03:12 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
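The template assembled above expands to the fio command executed next; every verify knob in it is a standard fio option. An annotated restatement (same flags, same values; the comment glosses describe fio's documented behavior and are not taken from this log):

# Annotated form of the fio command run below:
#   --direct=1                    O_DIRECT, bypass the page cache
#   --size=134217728              cover the full 128 MiB ublk device
#   --time_based --runtime=10     run for 10 s of wall time
#   --do_verify=1 --verify=pattern --verify_pattern=0xcc
#                                 stamp 0xcc into each block for verification
#   --verify_state_save=0         do not persist verify state to disk
fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
    --rw=write --direct=1 --time_based --runtime=10 \
    --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0

Because the job is time-based and the write phase consumes the whole runtime, fio itself warns below that the separate read-back verification phase never starts.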
00:19:22.677 12:03:12 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:19:22.935 fio: verification read phase will never start because write phase uses all of runtime 00:19:22.935 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:19:22.935 fio-3.35 00:19:22.935 Starting 1 process 00:19:32.913 00:19:32.913 fio_test: (groupid=0, jobs=1): err= 0: pid=75317: Wed Nov 27 12:03:22 2024 00:19:32.913 write: IOPS=5187, BW=20.3MiB/s (21.2MB/s)(203MiB/10001msec); 0 zone resets 00:19:32.913 clat (usec): min=42, max=8742, avg=191.89, stdev=176.68 00:19:32.913 lat (usec): min=42, max=8769, avg=192.39, stdev=176.74 00:19:32.913 clat percentiles (usec): 00:19:32.913 | 1.00th=[ 149], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:19:32.913 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 186], 00:19:32.913 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 206], 00:19:32.913 | 99.00th=[ 227], 99.50th=[ 255], 99.90th=[ 3720], 99.95th=[ 3949], 00:19:32.913 | 99.99th=[ 4293] 00:19:32.913 bw ( KiB/s): min= 9264, max=21824, per=100.00%, avg=20770.11, stdev=2798.93, samples=19 00:19:32.913 iops : min= 2316, max= 5456, avg=5192.53, stdev=699.73, samples=19 00:19:32.913 lat (usec) : 50=0.02%, 100=0.01%, 250=99.45%, 500=0.16%, 750=0.04% 00:19:32.913 lat (usec) : 1000=0.01% 00:19:32.913 lat (msec) : 2=0.07%, 4=0.20%, 10=0.04% 00:19:32.913 cpu : usr=0.87%, sys=3.53%, ctx=51880, majf=0, minf=797 00:19:32.913 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:32.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.913 issued rwts: total=0,51879,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.913 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:32.913 00:19:32.913 Run status group 0 (all jobs): 00:19:32.913 WRITE: bw=20.3MiB/s (21.2MB/s), 20.3MiB/s-20.3MiB/s (21.2MB/s-21.2MB/s), io=203MiB (212MB), run=10001-10001msec 00:19:32.913 00:19:32.913 Disk stats (read/write): 00:19:32.913 ublkb0: ios=0/51359, merge=0/0, ticks=0/9447, in_queue=9447, util=99.11% 00:19:32.913 12:03:22 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:19:32.913 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.913 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:32.913 [2024-11-27 12:03:22.915452] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:32.913 [2024-11-27 12:03:22.949818] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:32.913 [2024-11-27 12:03:22.950773] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:32.913 [2024-11-27 12:03:22.957416] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:32.913 [2024-11-27 12:03:22.957878] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:32.913 [2024-11-27 12:03:22.957902] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:32.913 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.914 12:03:22 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:19:32.914 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:19:32.914 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:19:32.914 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:32.914 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.914 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:32.914 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.914 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:19:32.914 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.914 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:33.173 [2024-11-27 12:03:22.971503] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:19:33.173 request: 00:19:33.173 { 00:19:33.173 "ublk_id": 0, 00:19:33.173 "method": "ublk_stop_disk", 00:19:33.173 "req_id": 1 00:19:33.173 } 00:19:33.173 Got JSON-RPC error response 00:19:33.173 response: 00:19:33.173 { 00:19:33.173 "code": -19, 00:19:33.173 "message": "No such device" 00:19:33.173 } 00:19:33.173 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:33.173 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:19:33.173 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:33.173 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:33.173 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:33.173 12:03:22 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:19:33.173 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.173 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:33.173 [2024-11-27 12:03:22.988501] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:33.173 [2024-11-27 12:03:22.996277] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:33.173 [2024-11-27 12:03:22.996321] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:33.173 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.173 12:03:22 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:33.173 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.173 12:03:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:33.741 12:03:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.741 12:03:23 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:19:33.741 12:03:23 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:19:33.741 12:03:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.741 12:03:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:33.741 12:03:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.741 12:03:23 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:19:33.741 12:03:23 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:19:33.741 12:03:23 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:19:33.741 12:03:23 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:19:33.741 12:03:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.741 12:03:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:33.741 12:03:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.741 12:03:23 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:19:33.741 12:03:23 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:19:34.000 ************************************ 00:19:34.000 END TEST test_create_ublk 00:19:34.000 ************************************ 00:19:34.000 12:03:23 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:19:34.000 00:19:34.000 real 0m11.767s 00:19:34.000 user 0m0.493s 00:19:34.000 sys 0m0.479s 00:19:34.000 12:03:23 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.000 12:03:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:34.000 12:03:23 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:19:34.000 12:03:23 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:34.000 12:03:23 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.000 12:03:23 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:34.000 ************************************ 00:19:34.000 START TEST test_create_multi_ublk 00:19:34.000 ************************************ 00:19:34.000 12:03:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:19:34.000 12:03:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:19:34.000 12:03:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.000 12:03:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:34.000 [2024-11-27 12:03:23.905374] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:34.000 [2024-11-27 12:03:23.908011] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:34.000 12:03:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.000 12:03:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:19:34.000 12:03:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:19:34.000 12:03:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:34.000 12:03:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:19:34.000 12:03:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.000 12:03:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:34.259 12:03:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.259 12:03:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:19:34.259 12:03:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:19:34.259 12:03:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.259 12:03:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:34.259 [2024-11-27 12:03:24.195582] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:19:34.259 [2024-11-27 12:03:24.196054] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:19:34.260 [2024-11-27 12:03:24.196072] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:34.260 [2024-11-27 12:03:24.196086] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:34.260 [2024-11-27 12:03:24.203420] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:34.260 [2024-11-27 12:03:24.203447] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:34.260 [2024-11-27 12:03:24.211418] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:34.260 [2024-11-27 12:03:24.211997] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:34.260 [2024-11-27 12:03:24.234423] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:34.260 12:03:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.260 12:03:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:19:34.260 12:03:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:34.260 12:03:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:19:34.260 12:03:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.260 12:03:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:34.519 12:03:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.519 12:03:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:19:34.519 12:03:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:19:34.519 12:03:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.519 12:03:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:34.519 [2024-11-27 12:03:24.514569] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:19:34.519 [2024-11-27 12:03:24.515050] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:19:34.519 [2024-11-27 12:03:24.515071] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:34.519 [2024-11-27 12:03:24.515079] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:19:34.519 [2024-11-27 12:03:24.522442] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:34.519 [2024-11-27 12:03:24.522466] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:34.519 [2024-11-27 12:03:24.530418] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:34.519 [2024-11-27 12:03:24.530981] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:19:34.519 [2024-11-27 12:03:24.547467] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:19:34.519 12:03:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.519 12:03:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:19:34.519 12:03:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:34.519 
12:03:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:19:34.519 12:03:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.519 12:03:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:34.778 12:03:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.778 12:03:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:19:35.038 12:03:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:19:35.038 12:03:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.038 12:03:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:35.038 [2024-11-27 12:03:24.840499] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:19:35.038 [2024-11-27 12:03:24.840963] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:19:35.038 [2024-11-27 12:03:24.840975] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:19:35.038 [2024-11-27 12:03:24.840985] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:19:35.038 [2024-11-27 12:03:24.848399] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:35.038 [2024-11-27 12:03:24.848426] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:35.038 [2024-11-27 12:03:24.856422] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:35.038 [2024-11-27 12:03:24.857007] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:19:35.038 [2024-11-27 12:03:24.865478] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:19:35.038 12:03:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.038 12:03:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:19:35.038 12:03:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:35.038 12:03:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:19:35.038 12:03:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.038 12:03:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:35.296 12:03:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.296 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:19:35.296 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:19:35.296 12:03:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.296 12:03:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:35.296 [2024-11-27 12:03:25.158545] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:19:35.296 [2024-11-27 12:03:25.158982] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:19:35.296 [2024-11-27 12:03:25.158996] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:19:35.296 [2024-11-27 12:03:25.159004] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:19:35.296 
[2024-11-27 12:03:25.166453] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:35.297 [2024-11-27 12:03:25.166475] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:35.297 [2024-11-27 12:03:25.174443] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:35.297 [2024-11-27 12:03:25.175045] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:19:35.297 [2024-11-27 12:03:25.183454] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:19:35.297 12:03:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.297 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:19:35.297 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:19:35.297 12:03:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.297 12:03:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:35.297 12:03:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.297 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:19:35.297 { 00:19:35.297 "ublk_device": "/dev/ublkb0", 00:19:35.297 "id": 0, 00:19:35.297 "queue_depth": 512, 00:19:35.297 "num_queues": 4, 00:19:35.297 "bdev_name": "Malloc0" 00:19:35.297 }, 00:19:35.297 { 00:19:35.297 "ublk_device": "/dev/ublkb1", 00:19:35.297 "id": 1, 00:19:35.297 "queue_depth": 512, 00:19:35.297 "num_queues": 4, 00:19:35.297 "bdev_name": "Malloc1" 00:19:35.297 }, 00:19:35.297 { 00:19:35.297 "ublk_device": "/dev/ublkb2", 00:19:35.297 "id": 2, 00:19:35.297 "queue_depth": 512, 00:19:35.297 "num_queues": 4, 00:19:35.297 "bdev_name": "Malloc2" 00:19:35.297 }, 00:19:35.297 { 00:19:35.297 "ublk_device": "/dev/ublkb3", 00:19:35.297 "id": 3, 00:19:35.297 "queue_depth": 512, 00:19:35.297 "num_queues": 4, 00:19:35.297 "bdev_name": "Malloc3" 00:19:35.297 } 00:19:35.297 ]' 00:19:35.297 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:19:35.297 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:35.297 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:19:35.297 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:35.297 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:19:35.297 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:19:35.297 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:19:35.556 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:35.556 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:19:35.556 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:35.556 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:19:35.556 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:19:35.556 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:35.556 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:19:35.556 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:19:35.556 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:19:35.556 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:19:35.556 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:19:35.556 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:35.556 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:19:35.556 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:35.556 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:19:35.815 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:19:35.815 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:35.815 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:19:35.815 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:19:35.815 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:19:35.815 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:19:35.815 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:19:35.815 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:35.815 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:19:35.815 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:35.815 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:19:35.815 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:19:35.815 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:35.815 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:19:36.074 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:19:36.074 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:19:36.074 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:19:36.074 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:19:36.074 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:36.074 12:03:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:19:36.074 12:03:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:36.074 12:03:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:19:36.074 12:03:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:19:36.074 12:03:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:19:36.074 12:03:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:19:36.074 12:03:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:36.074 12:03:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:19:36.074 12:03:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.074 12:03:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:36.074 [2024-11-27 12:03:26.071491] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:36.074 [2024-11-27 12:03:26.114023] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:36.074 [2024-11-27 12:03:26.118873] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:36.074 [2024-11-27 12:03:26.125392] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:36.074 [2024-11-27 12:03:26.125807] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:36.074 [2024-11-27 12:03:26.125827] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:36.332 12:03:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.332 12:03:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:36.332 12:03:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:19:36.332 12:03:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.332 12:03:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:36.332 [2024-11-27 12:03:26.131514] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:36.332 [2024-11-27 12:03:26.167080] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:36.332 [2024-11-27 12:03:26.168728] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:36.332 [2024-11-27 12:03:26.173403] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:36.332 [2024-11-27 12:03:26.173790] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:36.332 [2024-11-27 12:03:26.173809] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:36.332 12:03:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.332 12:03:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:36.333 12:03:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:19:36.333 12:03:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.333 12:03:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:36.333 [2024-11-27 12:03:26.188576] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:19:36.333 [2024-11-27 12:03:26.229392] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:36.333 [2024-11-27 12:03:26.231034] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:19:36.333 [2024-11-27 12:03:26.237404] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:36.333 [2024-11-27 12:03:26.237791] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:19:36.333 [2024-11-27 12:03:26.237810] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:19:36.333 12:03:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.333 12:03:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:36.333 12:03:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:19:36.333 12:03:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.333 12:03:26 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:19:36.333 [2024-11-27 12:03:26.253511] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:19:36.333 [2024-11-27 12:03:26.287053] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:36.333 [2024-11-27 12:03:26.287875] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:19:36.333 [2024-11-27 12:03:26.293425] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:36.333 [2024-11-27 12:03:26.293789] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:19:36.333 [2024-11-27 12:03:26.293808] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:19:36.333 12:03:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.333 12:03:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:19:36.591 [2024-11-27 12:03:26.485477] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:36.591 [2024-11-27 12:03:26.494385] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:36.591 [2024-11-27 12:03:26.494428] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:36.591 12:03:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:19:36.591 12:03:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:36.591 12:03:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:36.592 12:03:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.592 12:03:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:37.159 12:03:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.159 12:03:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:37.159 12:03:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:37.159 12:03:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.159 12:03:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:37.726 12:03:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.726 12:03:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:37.727 12:03:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:19:37.727 12:03:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.727 12:03:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:37.985 12:03:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.985 12:03:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:37.985 12:03:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:19:37.985 12:03:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.985 12:03:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:38.245 12:03:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.245 12:03:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:19:38.245 12:03:28 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:19:38.245 12:03:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.245 12:03:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:38.504 12:03:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.504 12:03:28 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:19:38.504 12:03:28 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:19:38.504 12:03:28 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:19:38.504 12:03:28 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:19:38.504 12:03:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.504 12:03:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:38.504 12:03:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.504 12:03:28 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:19:38.504 12:03:28 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:19:38.504 ************************************ 00:19:38.504 END TEST test_create_multi_ublk 00:19:38.504 ************************************ 00:19:38.504 12:03:28 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:19:38.504 00:19:38.504 real 0m4.514s 00:19:38.504 user 0m1.014s 00:19:38.505 sys 0m0.225s 00:19:38.505 12:03:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:38.505 12:03:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:38.505 12:03:28 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:38.505 12:03:28 ublk -- ublk/ublk.sh@147 -- # cleanup 00:19:38.505 12:03:28 ublk -- ublk/ublk.sh@130 -- # killprocess 75269 00:19:38.505 12:03:28 ublk -- common/autotest_common.sh@954 -- # '[' -z 75269 ']' 00:19:38.505 12:03:28 ublk -- common/autotest_common.sh@958 -- # kill -0 75269 00:19:38.505 12:03:28 ublk -- common/autotest_common.sh@959 -- # uname 00:19:38.505 12:03:28 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.505 12:03:28 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75269 00:19:38.505 killing process with pid 75269 00:19:38.505 12:03:28 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:38.505 12:03:28 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:38.505 12:03:28 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75269' 00:19:38.505 12:03:28 ublk -- common/autotest_common.sh@973 -- # kill 75269 00:19:38.505 12:03:28 ublk -- common/autotest_common.sh@978 -- # wait 75269 00:19:39.887 [2024-11-27 12:03:29.604113] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:39.887 [2024-11-27 12:03:29.604184] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:40.826 00:19:40.826 real 0m31.229s 00:19:40.826 user 0m44.906s 00:19:40.826 sys 0m8.842s 00:19:40.826 12:03:30 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.826 12:03:30 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:40.826 ************************************ 00:19:40.826 END TEST ublk 00:19:40.826 ************************************ 00:19:40.826 12:03:30 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:19:40.826 
12:03:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:40.826 12:03:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.826 12:03:30 -- common/autotest_common.sh@10 -- # set +x 00:19:41.086 ************************************ 00:19:41.086 START TEST ublk_recovery 00:19:41.086 ************************************ 00:19:41.086 12:03:30 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:19:41.086 * Looking for test storage... 00:19:41.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:19:41.086 12:03:31 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:41.086 12:03:31 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:19:41.086 12:03:31 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:41.086 12:03:31 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:41.086 12:03:31 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:19:41.086 12:03:31 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:41.086 12:03:31 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:41.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.086 --rc genhtml_branch_coverage=1 00:19:41.086 --rc genhtml_function_coverage=1 00:19:41.086 --rc genhtml_legend=1 00:19:41.086 --rc geninfo_all_blocks=1 00:19:41.086 --rc geninfo_unexecuted_blocks=1 00:19:41.086 00:19:41.086 ' 00:19:41.086 12:03:31 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:41.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.086 --rc genhtml_branch_coverage=1 00:19:41.086 --rc genhtml_function_coverage=1 00:19:41.086 --rc genhtml_legend=1 00:19:41.086 --rc geninfo_all_blocks=1 00:19:41.086 --rc geninfo_unexecuted_blocks=1 00:19:41.086 00:19:41.086 ' 00:19:41.086 12:03:31 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:41.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.086 --rc genhtml_branch_coverage=1 00:19:41.086 --rc genhtml_function_coverage=1 00:19:41.086 --rc genhtml_legend=1 00:19:41.086 --rc geninfo_all_blocks=1 00:19:41.086 --rc geninfo_unexecuted_blocks=1 00:19:41.086 00:19:41.086 ' 00:19:41.086 12:03:31 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:41.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.086 --rc genhtml_branch_coverage=1 00:19:41.086 --rc genhtml_function_coverage=1 00:19:41.086 --rc genhtml_legend=1 00:19:41.086 --rc geninfo_all_blocks=1 00:19:41.086 --rc geninfo_unexecuted_blocks=1 00:19:41.086 00:19:41.086 ' 00:19:41.086 12:03:31 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:19:41.086 12:03:31 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:19:41.086 12:03:31 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:19:41.086 12:03:31 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:19:41.086 12:03:31 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:19:41.086 12:03:31 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:19:41.086 12:03:31 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:19:41.086 12:03:31 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:19:41.086 12:03:31 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:19:41.086 12:03:31 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:19:41.086 12:03:31 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75689 00:19:41.087 12:03:31 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:41.087 12:03:31 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:41.087 12:03:31 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75689 00:19:41.087 12:03:31 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75689 ']' 00:19:41.087 12:03:31 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.087 12:03:31 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.087 12:03:31 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.087 12:03:31 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.087 12:03:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.347 [2024-11-27 12:03:31.250168] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:19:41.347 [2024-11-27 12:03:31.250294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75689 ] 00:19:41.607 [2024-11-27 12:03:31.434229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:41.607 [2024-11-27 12:03:31.543718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.607 [2024-11-27 12:03:31.543746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.543 12:03:32 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.543 12:03:32 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:19:42.543 12:03:32 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:19:42.543 12:03:32 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.543 12:03:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.543 [2024-11-27 12:03:32.413404] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:42.543 [2024-11-27 12:03:32.416055] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:42.543 12:03:32 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.543 12:03:32 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:19:42.543 12:03:32 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.543 12:03:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.543 malloc0 00:19:42.543 12:03:32 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.543 12:03:32 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:19:42.543 12:03:32 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.543 12:03:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.543 [2024-11-27 12:03:32.579618] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:19:42.543 [2024-11-27 12:03:32.579746] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:19:42.543 [2024-11-27 12:03:32.579760] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:42.543 [2024-11-27 12:03:32.579771] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:19:42.543 [2024-11-27 12:03:32.588611] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:42.543 [2024-11-27 12:03:32.588636] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:42.801 [2024-11-27 12:03:32.595425] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:42.801 [2024-11-27 12:03:32.595561] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:19:42.801 [2024-11-27 12:03:32.610435] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:19:42.801 1 00:19:42.801 12:03:32 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.801 12:03:32 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:19:43.736 12:03:33 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75725 00:19:43.736 12:03:33 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:19:43.736 12:03:33 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:19:43.736 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:43.736 fio-3.35 00:19:43.736 Starting 1 process 00:19:49.003 12:03:38 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75689 00:19:49.003 12:03:38 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:19:54.278 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75689 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:19:54.278 12:03:43 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=75833 00:19:54.278 12:03:43 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:54.278 12:03:43 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:54.278 12:03:43 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 75833 00:19:54.278 12:03:43 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75833 ']' 00:19:54.278 12:03:43 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.278 12:03:43 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.278 12:03:43 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.278 12:03:43 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.278 12:03:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:54.278 [2024-11-27 12:03:43.757111] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
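This second spdk_tgt (pid 75833) is the point of the recovery test: the script SIGKILLed the first target (pid 75689) while the 60-second fio job was mid-run, and the new process must re-attach to the kernel-side ublk device the dead one left behind. A condensed sketch of the flow ublk_recovery.sh traces here, using the variable names visible in the trace ($spdk_pid and $fio_proc); the real script also waits for the RPC socket between the restart and the rpc.py calls:

  kill -9 "$spdk_pid"                                               # hard-kill the target mid-I/O
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &  # bring up a fresh target
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_create_target
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  # re-attach to the still-open /dev/ublkb1 instead of creating a new device
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_recover_disk malloc0 1
  wait "$fio_proc"                                                  # fio (75725 here) resumes and finishes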
00:19:54.278 [2024-11-27 12:03:43.757251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75833 ] 00:19:54.278 [2024-11-27 12:03:43.940864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:54.278 [2024-11-27 12:03:44.052670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.278 [2024-11-27 12:03:44.052696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.214 12:03:44 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.214 12:03:44 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:19:55.214 12:03:44 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:19:55.214 12:03:44 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.214 12:03:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:55.214 [2024-11-27 12:03:44.920380] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:55.214 [2024-11-27 12:03:44.923061] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:55.214 12:03:44 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.214 12:03:44 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:19:55.214 12:03:44 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.214 12:03:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:55.214 malloc0 00:19:55.214 12:03:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.214 12:03:45 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:19:55.214 12:03:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.214 12:03:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:55.214 [2024-11-27 12:03:45.069569] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:19:55.214 [2024-11-27 12:03:45.069622] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:55.214 [2024-11-27 12:03:45.069634] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:19:55.214 [2024-11-27 12:03:45.077475] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:19:55.214 [2024-11-27 12:03:45.077498] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:19:55.214 1 00:19:55.214 12:03:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.214 12:03:45 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75725 00:19:56.151 [2024-11-27 12:03:46.075913] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:19:56.151 [2024-11-27 12:03:46.082449] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:19:56.151 [2024-11-27 12:03:46.082468] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:19:57.088 [2024-11-27 12:03:47.080926] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:19:57.088 [2024-11-27 12:03:47.087430] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:19:57.088 [2024-11-27 12:03:47.087452] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:19:58.466 [2024-11-27 12:03:48.089429] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:19:58.466 [2024-11-27 12:03:48.096474] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:19:58.466 [2024-11-27 12:03:48.096492] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:19:58.466 [2024-11-27 12:03:48.096505] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:19:58.466 [2024-11-27 12:03:48.096602] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:20:20.565 [2024-11-27 12:04:09.066452] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:20:20.565 [2024-11-27 12:04:09.070961] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:20:20.565 [2024-11-27 12:04:09.080656] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:20:20.565 [2024-11-27 12:04:09.080679] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:20:47.124 00:20:47.124 fio_test: (groupid=0, jobs=1): err= 0: pid=75728: Wed Nov 27 12:04:33 2024 00:20:47.124 read: IOPS=9607, BW=37.5MiB/s (39.4MB/s)(2252MiB/60002msec) 00:20:47.124 slat (usec): min=3, max=297, avg= 9.76, stdev= 2.55 00:20:47.124 clat (usec): min=1554, max=30461k, avg=6652.70, stdev=323372.47 00:20:47.124 lat (usec): min=1567, max=30461k, avg=6662.47, stdev=323372.47 00:20:47.124 clat percentiles (msec): 00:20:47.124 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:20:47.124 | 30.00th=[ 3], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:20:47.124 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:20:47.124 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 10], 00:20:47.124 | 99.99th=[17113] 00:20:47.124 bw ( KiB/s): min=25488, max=81816, per=100.00%, avg=76983.61, stdev=10117.48, samples=59 00:20:47.124 iops : min= 6372, max=20454, avg=19245.88, stdev=2529.37, samples=59 00:20:47.124 write: IOPS=9594, BW=37.5MiB/s (39.3MB/s)(2249MiB/60002msec); 0 zone resets 00:20:47.124 slat (usec): min=3, max=213, avg= 9.81, stdev= 2.56 00:20:47.124 clat (usec): min=1588, max=30460k, avg=6657.73, stdev=318581.98 00:20:47.124 lat (usec): min=1601, max=30460k, avg=6667.54, stdev=318581.98 00:20:47.124 clat percentiles (msec): 00:20:47.124 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 4], 00:20:47.124 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:20:47.124 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:20:47.124 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 10], 00:20:47.124 | 99.99th=[17113] 00:20:47.124 bw ( KiB/s): min=25520, max=82352, per=100.00%, avg=76868.02, stdev=10047.22, samples=59 00:20:47.124 iops : min= 6380, max=20588, avg=19216.98, stdev=2511.81, samples=59 00:20:47.124 lat (msec) : 2=0.01%, 4=93.55%, 10=6.41%, 20=0.02%, >=2000=0.01% 00:20:47.124 cpu : usr=6.93%, sys=18.94%, ctx=53344, majf=0, minf=13 00:20:47.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:20:47.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:47.124 issued rwts: total=576486,575674,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:47.124 
00:20:47.124 Run status group 0 (all jobs): 00:20:47.124 READ: bw=37.5MiB/s (39.4MB/s), 37.5MiB/s-37.5MiB/s (39.4MB/s-39.4MB/s), io=2252MiB (2361MB), run=60002-60002msec 00:20:47.124 WRITE: bw=37.5MiB/s (39.3MB/s), 37.5MiB/s-37.5MiB/s (39.3MB/s-39.3MB/s), io=2249MiB (2358MB), run=60002-60002msec 00:20:47.124 00:20:47.124 Disk stats (read/write): 00:20:47.124 ublkb1: ios=574362/573572, merge=0/0, ticks=3767151/3688956, in_queue=7456107, util=99.96% 00:20:47.124 12:04:33 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:20:47.124 12:04:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.124 12:04:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.124 [2024-11-27 12:04:33.902599] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:20:47.124 [2024-11-27 12:04:33.949541] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:47.124 [2024-11-27 12:04:33.949741] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:20:47.124 [2024-11-27 12:04:33.960436] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:47.125 [2024-11-27 12:04:33.960621] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:20:47.125 [2024-11-27 12:04:33.960633] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:20:47.125 12:04:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.125 12:04:33 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:20:47.125 12:04:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.125 12:04:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.125 [2024-11-27 12:04:33.976551] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:47.125 [2024-11-27 12:04:33.984407] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:47.125 [2024-11-27 12:04:33.984446] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:47.125 12:04:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.125 12:04:33 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:20:47.125 12:04:33 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:20:47.125 12:04:33 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 75833 00:20:47.125 12:04:33 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 75833 ']' 00:20:47.125 12:04:33 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 75833 00:20:47.125 12:04:33 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:20:47.125 12:04:33 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.125 12:04:33 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75833 00:20:47.125 12:04:34 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:47.125 12:04:34 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:47.125 12:04:34 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75833' 00:20:47.125 killing process with pid 75833 00:20:47.125 12:04:34 ublk_recovery -- common/autotest_common.sh@973 -- # kill 75833 00:20:47.125 12:04:34 ublk_recovery -- common/autotest_common.sh@978 -- # wait 75833 00:20:47.125 [2024-11-27 12:04:35.592640] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:47.125 [2024-11-27 12:04:35.592713] ublk.c: 
766:_ublk_fini_done: *DEBUG*: 00:20:47.125 00:20:47.125 real 1m6.064s 00:20:47.125 user 1m52.810s 00:20:47.125 sys 0m23.968s 00:20:47.125 12:04:36 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.125 12:04:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.125 ************************************ 00:20:47.125 END TEST ublk_recovery 00:20:47.125 ************************************ 00:20:47.125 12:04:37 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:20:47.125 12:04:37 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:20:47.125 12:04:37 -- spdk/autotest.sh@260 -- # timing_exit lib 00:20:47.125 12:04:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:47.125 12:04:37 -- common/autotest_common.sh@10 -- # set +x 00:20:47.125 12:04:37 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:20:47.125 12:04:37 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:20:47.125 12:04:37 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:20:47.125 12:04:37 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:47.125 12:04:37 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:47.125 12:04:37 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:47.125 12:04:37 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:20:47.125 12:04:37 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:47.125 12:04:37 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:47.125 12:04:37 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:20:47.125 12:04:37 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:47.125 12:04:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:47.125 12:04:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:47.125 12:04:37 -- common/autotest_common.sh@10 -- # set +x 00:20:47.125 ************************************ 00:20:47.125 START TEST ftl 00:20:47.125 ************************************ 00:20:47.125 12:04:37 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:47.386 * Looking for test storage... 00:20:47.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:47.386 12:04:37 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:47.386 12:04:37 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:20:47.386 12:04:37 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:47.386 12:04:37 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:47.386 12:04:37 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:47.386 12:04:37 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:47.386 12:04:37 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:47.386 12:04:37 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:20:47.386 12:04:37 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:20:47.386 12:04:37 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:20:47.386 12:04:37 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:20:47.386 12:04:37 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:20:47.386 12:04:37 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:20:47.386 12:04:37 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:20:47.386 12:04:37 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:47.386 12:04:37 ftl -- scripts/common.sh@344 -- # case "$op" in 00:20:47.386 12:04:37 ftl -- scripts/common.sh@345 -- # : 1 00:20:47.386 12:04:37 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:47.386 12:04:37 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:47.386 12:04:37 ftl -- scripts/common.sh@365 -- # decimal 1 00:20:47.386 12:04:37 ftl -- scripts/common.sh@353 -- # local d=1 00:20:47.386 12:04:37 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:47.386 12:04:37 ftl -- scripts/common.sh@355 -- # echo 1 00:20:47.386 12:04:37 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:20:47.386 12:04:37 ftl -- scripts/common.sh@366 -- # decimal 2 00:20:47.386 12:04:37 ftl -- scripts/common.sh@353 -- # local d=2 00:20:47.386 12:04:37 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:47.386 12:04:37 ftl -- scripts/common.sh@355 -- # echo 2 00:20:47.386 12:04:37 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:20:47.386 12:04:37 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:47.386 12:04:37 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:47.386 12:04:37 ftl -- scripts/common.sh@368 -- # return 0 00:20:47.386 12:04:37 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:47.386 12:04:37 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:47.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.386 --rc genhtml_branch_coverage=1 00:20:47.386 --rc genhtml_function_coverage=1 00:20:47.386 --rc genhtml_legend=1 00:20:47.386 --rc geninfo_all_blocks=1 00:20:47.386 --rc geninfo_unexecuted_blocks=1 00:20:47.386 00:20:47.386 ' 00:20:47.386 12:04:37 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:47.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.386 --rc genhtml_branch_coverage=1 00:20:47.386 --rc genhtml_function_coverage=1 00:20:47.386 --rc genhtml_legend=1 00:20:47.386 --rc geninfo_all_blocks=1 00:20:47.386 --rc geninfo_unexecuted_blocks=1 00:20:47.386 00:20:47.386 ' 00:20:47.386 12:04:37 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:47.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.386 --rc genhtml_branch_coverage=1 00:20:47.386 --rc genhtml_function_coverage=1 00:20:47.386 --rc genhtml_legend=1 00:20:47.386 --rc geninfo_all_blocks=1 00:20:47.386 --rc geninfo_unexecuted_blocks=1 00:20:47.386 00:20:47.386 ' 00:20:47.386 12:04:37 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:47.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.386 --rc genhtml_branch_coverage=1 00:20:47.386 --rc genhtml_function_coverage=1 00:20:47.386 --rc genhtml_legend=1 00:20:47.386 --rc geninfo_all_blocks=1 00:20:47.386 --rc geninfo_unexecuted_blocks=1 00:20:47.386 00:20:47.386 ' 00:20:47.386 12:04:37 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:47.386 12:04:37 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:47.386 12:04:37 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:47.386 12:04:37 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:47.386 12:04:37 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:47.386 12:04:37 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:47.386 12:04:37 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:47.386 12:04:37 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:47.386 12:04:37 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:47.386 12:04:37 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:47.386 12:04:37 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:47.386 12:04:37 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:47.386 12:04:37 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:47.386 12:04:37 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:47.386 12:04:37 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:47.386 12:04:37 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:47.386 12:04:37 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:47.386 12:04:37 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:47.386 12:04:37 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:47.386 12:04:37 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:47.386 12:04:37 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:47.386 12:04:37 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:47.386 12:04:37 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:47.386 12:04:37 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:47.386 12:04:37 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:47.386 12:04:37 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:47.386 12:04:37 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:47.386 12:04:37 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:47.386 12:04:37 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:47.386 12:04:37 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:47.386 12:04:37 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:20:47.386 12:04:37 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:20:47.386 12:04:37 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:20:47.386 12:04:37 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:20:47.386 12:04:37 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:47.956 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:48.215 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:48.215 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:48.215 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:48.215 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:48.215 12:04:38 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76643 00:20:48.215 12:04:38 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:20:48.215 12:04:38 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76643 00:20:48.215 12:04:38 ftl -- common/autotest_common.sh@835 -- # '[' -z 76643 ']' 00:20:48.215 12:04:38 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.215 12:04:38 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.215 12:04:38 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.215 12:04:38 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.215 12:04:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:48.475 [2024-11-27 12:04:38.364264] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:20:48.475 [2024-11-27 12:04:38.364421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76643 ] 00:20:48.734 [2024-11-27 12:04:38.545715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.734 [2024-11-27 12:04:38.652229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.302 12:04:39 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:49.302 12:04:39 ftl -- common/autotest_common.sh@868 -- # return 0 00:20:49.302 12:04:39 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:20:49.302 12:04:39 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:20:50.684 12:04:40 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:20:50.684 12:04:40 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:50.942 12:04:40 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:20:50.942 12:04:40 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:20:50.942 12:04:40 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:20:51.201 12:04:41 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:20:51.201 12:04:41 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:20:51.201 12:04:41 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:20:51.201 12:04:41 ftl -- ftl/ftl.sh@50 -- # break 00:20:51.202 12:04:41 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:20:51.202 12:04:41 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:20:51.202 12:04:41 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:20:51.202 12:04:41 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:20:51.461 12:04:41 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:20:51.461 12:04:41 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:20:51.461 12:04:41 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:20:51.461 12:04:41 ftl -- ftl/ftl.sh@63 -- # break 00:20:51.461 12:04:41 ftl -- ftl/ftl.sh@66 -- # killprocess 76643 00:20:51.461 12:04:41 ftl -- common/autotest_common.sh@954 -- # '[' -z 76643 ']' 00:20:51.461 12:04:41 ftl -- common/autotest_common.sh@958 -- # kill -0 76643 00:20:51.461 12:04:41 ftl -- common/autotest_common.sh@959 -- # uname 00:20:51.461 12:04:41 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.461 12:04:41 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76643 00:20:51.461 12:04:41 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:51.461 killing process with pid 76643 00:20:51.461 12:04:41 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:51.461 12:04:41 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76643' 00:20:51.461 12:04:41 ftl -- common/autotest_common.sh@973 -- # kill 76643 00:20:51.461 12:04:41 ftl -- common/autotest_common.sh@978 -- # wait 76643 00:20:53.994 12:04:43 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:20:53.994 12:04:43 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:20:53.994 12:04:43 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:53.994 12:04:43 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:53.994 12:04:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:53.994 ************************************ 00:20:53.994 START TEST ftl_fio_basic 00:20:53.994 ************************************ 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:20:53.994 * Looking for test storage... 00:20:53.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:53.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.994 --rc genhtml_branch_coverage=1 00:20:53.994 --rc genhtml_function_coverage=1 00:20:53.994 --rc genhtml_legend=1 00:20:53.994 --rc geninfo_all_blocks=1 00:20:53.994 --rc geninfo_unexecuted_blocks=1 00:20:53.994 00:20:53.994 ' 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:53.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.994 --rc genhtml_branch_coverage=1 00:20:53.994 --rc genhtml_function_coverage=1 00:20:53.994 --rc genhtml_legend=1 00:20:53.994 --rc geninfo_all_blocks=1 00:20:53.994 --rc geninfo_unexecuted_blocks=1 00:20:53.994 00:20:53.994 ' 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:53.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.994 --rc genhtml_branch_coverage=1 00:20:53.994 --rc genhtml_function_coverage=1 00:20:53.994 --rc genhtml_legend=1 00:20:53.994 --rc geninfo_all_blocks=1 00:20:53.994 --rc geninfo_unexecuted_blocks=1 00:20:53.994 00:20:53.994 ' 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:53.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.994 --rc genhtml_branch_coverage=1 00:20:53.994 --rc genhtml_function_coverage=1 00:20:53.994 --rc genhtml_legend=1 00:20:53.994 --rc geninfo_all_blocks=1 00:20:53.994 --rc geninfo_unexecuted_blocks=1 00:20:53.994 00:20:53.994 ' 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76786 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76786 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76786 ']' 00:20:53.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:53.994 12:04:43 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:20:53.994 [2024-11-27 12:04:43.973621] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
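Note: at this point ftl/fio.sh launches its own spdk_tgt with core mask 7 (reactors on cores 0-2, visible below) and blocks in waitforlisten until the RPC socket answers. A simplified sketch of that bring-up, assuming the repo-relative paths used throughout this log:

  # start the target on three cores and wait for /var/tmp/spdk.sock to answer
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &
  svcpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done

waitforlisten itself is more careful (pid liveness checks, retry limit), but a polling loop like this is the essential behavior.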
00:20:53.994 [2024-11-27 12:04:43.973929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76786 ] 00:20:54.253 [2024-11-27 12:04:44.153614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:54.253 [2024-11-27 12:04:44.266865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.253 [2024-11-27 12:04:44.267024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.253 [2024-11-27 12:04:44.267074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.187 12:04:45 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.187 12:04:45 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:20:55.187 12:04:45 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:55.187 12:04:45 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:20:55.187 12:04:45 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:55.187 12:04:45 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:20:55.187 12:04:45 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:20:55.187 12:04:45 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:55.446 12:04:45 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:55.446 12:04:45 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:20:55.446 12:04:45 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:55.446 12:04:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:55.446 12:04:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:55.446 12:04:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:55.446 12:04:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:55.446 12:04:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:55.706 12:04:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:55.706 { 00:20:55.706 "name": "nvme0n1", 00:20:55.706 "aliases": [ 00:20:55.706 "3a50a8bb-f837-4ea3-bd2b-cf0c9e671a82" 00:20:55.706 ], 00:20:55.706 "product_name": "NVMe disk", 00:20:55.706 "block_size": 4096, 00:20:55.706 "num_blocks": 1310720, 00:20:55.706 "uuid": "3a50a8bb-f837-4ea3-bd2b-cf0c9e671a82", 00:20:55.706 "numa_id": -1, 00:20:55.706 "assigned_rate_limits": { 00:20:55.706 "rw_ios_per_sec": 0, 00:20:55.706 "rw_mbytes_per_sec": 0, 00:20:55.706 "r_mbytes_per_sec": 0, 00:20:55.706 "w_mbytes_per_sec": 0 00:20:55.706 }, 00:20:55.706 "claimed": false, 00:20:55.706 "zoned": false, 00:20:55.706 "supported_io_types": { 00:20:55.706 "read": true, 00:20:55.706 "write": true, 00:20:55.706 "unmap": true, 00:20:55.706 "flush": true, 00:20:55.706 "reset": true, 00:20:55.706 "nvme_admin": true, 00:20:55.706 "nvme_io": true, 00:20:55.706 "nvme_io_md": false, 00:20:55.706 "write_zeroes": true, 00:20:55.706 "zcopy": false, 00:20:55.706 "get_zone_info": false, 00:20:55.706 "zone_management": false, 00:20:55.706 "zone_append": false, 00:20:55.706 "compare": true, 00:20:55.706 "compare_and_write": false, 00:20:55.706 "abort": true, 00:20:55.706 
"seek_hole": false, 00:20:55.706 "seek_data": false, 00:20:55.706 "copy": true, 00:20:55.706 "nvme_iov_md": false 00:20:55.706 }, 00:20:55.706 "driver_specific": { 00:20:55.706 "nvme": [ 00:20:55.706 { 00:20:55.706 "pci_address": "0000:00:11.0", 00:20:55.706 "trid": { 00:20:55.706 "trtype": "PCIe", 00:20:55.706 "traddr": "0000:00:11.0" 00:20:55.706 }, 00:20:55.706 "ctrlr_data": { 00:20:55.706 "cntlid": 0, 00:20:55.706 "vendor_id": "0x1b36", 00:20:55.706 "model_number": "QEMU NVMe Ctrl", 00:20:55.706 "serial_number": "12341", 00:20:55.706 "firmware_revision": "8.0.0", 00:20:55.706 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:55.706 "oacs": { 00:20:55.706 "security": 0, 00:20:55.706 "format": 1, 00:20:55.706 "firmware": 0, 00:20:55.706 "ns_manage": 1 00:20:55.706 }, 00:20:55.706 "multi_ctrlr": false, 00:20:55.706 "ana_reporting": false 00:20:55.706 }, 00:20:55.706 "vs": { 00:20:55.706 "nvme_version": "1.4" 00:20:55.706 }, 00:20:55.706 "ns_data": { 00:20:55.706 "id": 1, 00:20:55.706 "can_share": false 00:20:55.706 } 00:20:55.706 } 00:20:55.706 ], 00:20:55.706 "mp_policy": "active_passive" 00:20:55.706 } 00:20:55.706 } 00:20:55.706 ]' 00:20:55.706 12:04:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:55.706 12:04:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:55.706 12:04:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:55.706 12:04:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:55.706 12:04:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:55.706 12:04:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:20:55.706 12:04:45 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:20:55.706 12:04:45 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:55.706 12:04:45 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:20:55.706 12:04:45 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:55.706 12:04:45 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:55.966 12:04:45 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:20:55.966 12:04:45 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:56.225 12:04:46 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=8d31535f-a50b-4530-8571-8025796c7697 00:20:56.225 12:04:46 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8d31535f-a50b-4530-8571-8025796c7697 00:20:56.483 12:04:46 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=b55895a6-0550-4264-8522-53075604377a 00:20:56.483 12:04:46 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b55895a6-0550-4264-8522-53075604377a 00:20:56.483 12:04:46 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:20:56.483 12:04:46 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:56.483 12:04:46 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=b55895a6-0550-4264-8522-53075604377a 00:20:56.483 12:04:46 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:20:56.483 12:04:46 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size b55895a6-0550-4264-8522-53075604377a 00:20:56.483 12:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=b55895a6-0550-4264-8522-53075604377a 
00:20:56.483 12:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:56.483 12:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:56.483 12:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:56.483 12:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b55895a6-0550-4264-8522-53075604377a 00:20:56.483 12:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:56.483 { 00:20:56.483 "name": "b55895a6-0550-4264-8522-53075604377a", 00:20:56.483 "aliases": [ 00:20:56.483 "lvs/nvme0n1p0" 00:20:56.483 ], 00:20:56.483 "product_name": "Logical Volume", 00:20:56.483 "block_size": 4096, 00:20:56.483 "num_blocks": 26476544, 00:20:56.483 "uuid": "b55895a6-0550-4264-8522-53075604377a", 00:20:56.483 "assigned_rate_limits": { 00:20:56.483 "rw_ios_per_sec": 0, 00:20:56.483 "rw_mbytes_per_sec": 0, 00:20:56.483 "r_mbytes_per_sec": 0, 00:20:56.483 "w_mbytes_per_sec": 0 00:20:56.483 }, 00:20:56.483 "claimed": false, 00:20:56.483 "zoned": false, 00:20:56.483 "supported_io_types": { 00:20:56.483 "read": true, 00:20:56.483 "write": true, 00:20:56.483 "unmap": true, 00:20:56.483 "flush": false, 00:20:56.483 "reset": true, 00:20:56.483 "nvme_admin": false, 00:20:56.483 "nvme_io": false, 00:20:56.483 "nvme_io_md": false, 00:20:56.483 "write_zeroes": true, 00:20:56.483 "zcopy": false, 00:20:56.483 "get_zone_info": false, 00:20:56.483 "zone_management": false, 00:20:56.483 "zone_append": false, 00:20:56.483 "compare": false, 00:20:56.483 "compare_and_write": false, 00:20:56.483 "abort": false, 00:20:56.483 "seek_hole": true, 00:20:56.483 "seek_data": true, 00:20:56.483 "copy": false, 00:20:56.483 "nvme_iov_md": false 00:20:56.483 }, 00:20:56.483 "driver_specific": { 00:20:56.483 "lvol": { 00:20:56.483 "lvol_store_uuid": "8d31535f-a50b-4530-8571-8025796c7697", 00:20:56.483 "base_bdev": "nvme0n1", 00:20:56.483 "thin_provision": true, 00:20:56.483 "num_allocated_clusters": 0, 00:20:56.483 "snapshot": false, 00:20:56.483 "clone": false, 00:20:56.483 "esnap_clone": false 00:20:56.483 } 00:20:56.483 } 00:20:56.483 } 00:20:56.483 ]' 00:20:56.742 12:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:56.742 12:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:56.742 12:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:56.742 12:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:56.742 12:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:56.742 12:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:20:56.742 12:04:46 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:20:56.742 12:04:46 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:20:56.742 12:04:46 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:57.001 12:04:46 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:57.001 12:04:46 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:57.001 12:04:46 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size b55895a6-0550-4264-8522-53075604377a 00:20:57.001 12:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=b55895a6-0550-4264-8522-53075604377a 00:20:57.001 12:04:46 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:57.001 12:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:57.001 12:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:57.001 12:04:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b55895a6-0550-4264-8522-53075604377a 00:20:57.259 12:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:57.260 { 00:20:57.260 "name": "b55895a6-0550-4264-8522-53075604377a", 00:20:57.260 "aliases": [ 00:20:57.260 "lvs/nvme0n1p0" 00:20:57.260 ], 00:20:57.260 "product_name": "Logical Volume", 00:20:57.260 "block_size": 4096, 00:20:57.260 "num_blocks": 26476544, 00:20:57.260 "uuid": "b55895a6-0550-4264-8522-53075604377a", 00:20:57.260 "assigned_rate_limits": { 00:20:57.260 "rw_ios_per_sec": 0, 00:20:57.260 "rw_mbytes_per_sec": 0, 00:20:57.260 "r_mbytes_per_sec": 0, 00:20:57.260 "w_mbytes_per_sec": 0 00:20:57.260 }, 00:20:57.260 "claimed": false, 00:20:57.260 "zoned": false, 00:20:57.260 "supported_io_types": { 00:20:57.260 "read": true, 00:20:57.260 "write": true, 00:20:57.260 "unmap": true, 00:20:57.260 "flush": false, 00:20:57.260 "reset": true, 00:20:57.260 "nvme_admin": false, 00:20:57.260 "nvme_io": false, 00:20:57.260 "nvme_io_md": false, 00:20:57.260 "write_zeroes": true, 00:20:57.260 "zcopy": false, 00:20:57.260 "get_zone_info": false, 00:20:57.260 "zone_management": false, 00:20:57.260 "zone_append": false, 00:20:57.260 "compare": false, 00:20:57.260 "compare_and_write": false, 00:20:57.260 "abort": false, 00:20:57.260 "seek_hole": true, 00:20:57.260 "seek_data": true, 00:20:57.260 "copy": false, 00:20:57.260 "nvme_iov_md": false 00:20:57.260 }, 00:20:57.260 "driver_specific": { 00:20:57.260 "lvol": { 00:20:57.260 "lvol_store_uuid": "8d31535f-a50b-4530-8571-8025796c7697", 00:20:57.260 "base_bdev": "nvme0n1", 00:20:57.260 "thin_provision": true, 00:20:57.260 "num_allocated_clusters": 0, 00:20:57.260 "snapshot": false, 00:20:57.260 "clone": false, 00:20:57.260 "esnap_clone": false 00:20:57.260 } 00:20:57.260 } 00:20:57.260 } 00:20:57.260 ]' 00:20:57.260 12:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:57.260 12:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:57.260 12:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:57.260 12:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:57.260 12:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:57.260 12:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:20:57.260 12:04:47 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:20:57.260 12:04:47 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:57.519 12:04:47 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:20:57.519 12:04:47 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:20:57.519 12:04:47 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:20:57.519 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:20:57.519 12:04:47 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size b55895a6-0550-4264-8522-53075604377a 00:20:57.519 12:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=b55895a6-0550-4264-8522-53075604377a 00:20:57.519 12:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:57.519 12:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:57.519 12:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:57.519 12:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b55895a6-0550-4264-8522-53075604377a 00:20:57.779 12:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:57.779 { 00:20:57.779 "name": "b55895a6-0550-4264-8522-53075604377a", 00:20:57.779 "aliases": [ 00:20:57.779 "lvs/nvme0n1p0" 00:20:57.779 ], 00:20:57.779 "product_name": "Logical Volume", 00:20:57.779 "block_size": 4096, 00:20:57.779 "num_blocks": 26476544, 00:20:57.779 "uuid": "b55895a6-0550-4264-8522-53075604377a", 00:20:57.779 "assigned_rate_limits": { 00:20:57.779 "rw_ios_per_sec": 0, 00:20:57.779 "rw_mbytes_per_sec": 0, 00:20:57.779 "r_mbytes_per_sec": 0, 00:20:57.779 "w_mbytes_per_sec": 0 00:20:57.779 }, 00:20:57.779 "claimed": false, 00:20:57.779 "zoned": false, 00:20:57.779 "supported_io_types": { 00:20:57.779 "read": true, 00:20:57.779 "write": true, 00:20:57.779 "unmap": true, 00:20:57.779 "flush": false, 00:20:57.779 "reset": true, 00:20:57.779 "nvme_admin": false, 00:20:57.779 "nvme_io": false, 00:20:57.779 "nvme_io_md": false, 00:20:57.779 "write_zeroes": true, 00:20:57.779 "zcopy": false, 00:20:57.779 "get_zone_info": false, 00:20:57.779 "zone_management": false, 00:20:57.779 "zone_append": false, 00:20:57.779 "compare": false, 00:20:57.779 "compare_and_write": false, 00:20:57.779 "abort": false, 00:20:57.779 "seek_hole": true, 00:20:57.779 "seek_data": true, 00:20:57.779 "copy": false, 00:20:57.779 "nvme_iov_md": false 00:20:57.779 }, 00:20:57.779 "driver_specific": { 00:20:57.779 "lvol": { 00:20:57.779 "lvol_store_uuid": "8d31535f-a50b-4530-8571-8025796c7697", 00:20:57.779 "base_bdev": "nvme0n1", 00:20:57.779 "thin_provision": true, 00:20:57.779 "num_allocated_clusters": 0, 00:20:57.779 "snapshot": false, 00:20:57.779 "clone": false, 00:20:57.779 "esnap_clone": false 00:20:57.779 } 00:20:57.779 } 00:20:57.779 } 00:20:57.779 ]' 00:20:57.779 12:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:57.779 12:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:57.779 12:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:57.779 12:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:57.779 12:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:57.779 12:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:20:57.779 12:04:47 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:20:57.779 12:04:47 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:20:57.779 12:04:47 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b55895a6-0550-4264-8522-53075604377a -c nvc0n1p0 --l2p_dram_limit 60 00:20:58.040 [2024-11-27 12:04:47.844064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.040 [2024-11-27 12:04:47.844262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:58.040 [2024-11-27 12:04:47.844464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:58.040 
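Note: the "fio.sh: line 52: [: -eq: unary operator expected" message above is a genuine shell bug captured by the log, not an FTL failure: line 52 runs '[' $var -eq 1 ']' with the variable unset, so after expansion test(1) sees only '[ -eq 1 ]'. The condition evaluates false, the script falls through, and the run continues. A sketch of the failure and the usual fix (the variable name here is hypothetical; the real one expanded to nothing before xtrace printed it):

  unset use_append
  [ $use_append -eq 1 ] && echo append          # error: [: -eq: unary operator expected
  [ "${use_append:-0}" -eq 1 ] && echo append   # robust: quote and default to 0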
[2024-11-27 12:04:47.844508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.040 [2024-11-27 12:04:47.844674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.040 [2024-11-27 12:04:47.844693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:58.040 [2024-11-27 12:04:47.844711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:58.040 [2024-11-27 12:04:47.844722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.040 [2024-11-27 12:04:47.844782] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:58.040 [2024-11-27 12:04:47.845728] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:58.040 [2024-11-27 12:04:47.845758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.040 [2024-11-27 12:04:47.845772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:58.040 [2024-11-27 12:04:47.845786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:20:58.040 [2024-11-27 12:04:47.845796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.040 [2024-11-27 12:04:47.845899] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 99631124-c838-4dc1-b9ba-c9460ea8f2fe 00:20:58.040 [2024-11-27 12:04:47.847433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.040 [2024-11-27 12:04:47.847473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:58.040 [2024-11-27 12:04:47.847497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:58.040 [2024-11-27 12:04:47.847511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.040 [2024-11-27 12:04:47.855260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.040 [2024-11-27 12:04:47.855459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:58.040 [2024-11-27 12:04:47.855480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.628 ms 00:20:58.040 [2024-11-27 12:04:47.855500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.040 [2024-11-27 12:04:47.855634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.040 [2024-11-27 12:04:47.855651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:58.040 [2024-11-27 12:04:47.855663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:20:58.040 [2024-11-27 12:04:47.855680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.040 [2024-11-27 12:04:47.855782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.040 [2024-11-27 12:04:47.855798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:58.040 [2024-11-27 12:04:47.855810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:58.040 [2024-11-27 12:04:47.855823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.040 [2024-11-27 12:04:47.855876] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:58.040 [2024-11-27 12:04:47.861198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.040 [2024-11-27 
12:04:47.861243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:58.040 [2024-11-27 12:04:47.861264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.335 ms 00:20:58.040 [2024-11-27 12:04:47.861274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.040 [2024-11-27 12:04:47.861344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.040 [2024-11-27 12:04:47.861369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:58.040 [2024-11-27 12:04:47.861399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:58.040 [2024-11-27 12:04:47.861410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.040 [2024-11-27 12:04:47.861489] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:58.040 [2024-11-27 12:04:47.861647] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:58.040 [2024-11-27 12:04:47.861676] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:58.040 [2024-11-27 12:04:47.861689] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:58.040 [2024-11-27 12:04:47.861714] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:58.040 [2024-11-27 12:04:47.861727] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:58.040 [2024-11-27 12:04:47.861759] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:58.040 [2024-11-27 12:04:47.861770] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:58.040 [2024-11-27 12:04:47.861785] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:58.040 [2024-11-27 12:04:47.861797] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:58.040 [2024-11-27 12:04:47.861820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.040 [2024-11-27 12:04:47.861831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:58.040 [2024-11-27 12:04:47.861849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:20:58.040 [2024-11-27 12:04:47.861860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.040 [2024-11-27 12:04:47.861965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.040 [2024-11-27 12:04:47.861978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:58.040 [2024-11-27 12:04:47.861993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:58.040 [2024-11-27 12:04:47.862004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.040 [2024-11-27 12:04:47.862167] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:58.040 [2024-11-27 12:04:47.862185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:58.040 [2024-11-27 12:04:47.862201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:58.040 [2024-11-27 12:04:47.862213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.040 [2024-11-27 12:04:47.862229] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:20:58.040 [2024-11-27 12:04:47.862239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:58.040 [2024-11-27 12:04:47.862254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:58.040 [2024-11-27 12:04:47.862265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:58.040 [2024-11-27 12:04:47.862278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:58.040 [2024-11-27 12:04:47.862287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:58.040 [2024-11-27 12:04:47.862301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:58.040 [2024-11-27 12:04:47.862314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:58.040 [2024-11-27 12:04:47.862326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:58.040 [2024-11-27 12:04:47.862337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:58.040 [2024-11-27 12:04:47.862349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:58.040 [2024-11-27 12:04:47.862372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.040 [2024-11-27 12:04:47.862388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:58.040 [2024-11-27 12:04:47.862398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:58.040 [2024-11-27 12:04:47.862410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.040 [2024-11-27 12:04:47.862420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:58.040 [2024-11-27 12:04:47.862432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:58.040 [2024-11-27 12:04:47.862442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.040 [2024-11-27 12:04:47.862453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:58.040 [2024-11-27 12:04:47.862463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:58.041 [2024-11-27 12:04:47.862476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.041 [2024-11-27 12:04:47.862485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:58.041 [2024-11-27 12:04:47.862497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:58.041 [2024-11-27 12:04:47.862506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.041 [2024-11-27 12:04:47.862518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:58.041 [2024-11-27 12:04:47.862527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:58.041 [2024-11-27 12:04:47.862538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.041 [2024-11-27 12:04:47.862547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:58.041 [2024-11-27 12:04:47.862561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:58.041 [2024-11-27 12:04:47.862587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:58.041 [2024-11-27 12:04:47.862599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:58.041 [2024-11-27 12:04:47.862608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:58.041 [2024-11-27 12:04:47.862620] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:58.041 [2024-11-27 12:04:47.862629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:58.041 [2024-11-27 12:04:47.862640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:58.041 [2024-11-27 12:04:47.862649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.041 [2024-11-27 12:04:47.862660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:58.041 [2024-11-27 12:04:47.862670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:58.041 [2024-11-27 12:04:47.862684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.041 [2024-11-27 12:04:47.862696] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:58.041 [2024-11-27 12:04:47.862708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:58.041 [2024-11-27 12:04:47.862718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:58.041 [2024-11-27 12:04:47.862730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.041 [2024-11-27 12:04:47.862742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:58.041 [2024-11-27 12:04:47.862756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:58.041 [2024-11-27 12:04:47.862765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:58.041 [2024-11-27 12:04:47.862778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:58.041 [2024-11-27 12:04:47.862788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:58.041 [2024-11-27 12:04:47.862800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:58.041 [2024-11-27 12:04:47.862815] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:58.041 [2024-11-27 12:04:47.862830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:58.041 [2024-11-27 12:04:47.862842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:58.041 [2024-11-27 12:04:47.862855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:58.041 [2024-11-27 12:04:47.862865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:58.041 [2024-11-27 12:04:47.862878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:58.041 [2024-11-27 12:04:47.862889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:58.041 [2024-11-27 12:04:47.862901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:58.041 [2024-11-27 12:04:47.862911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:58.041 [2024-11-27 12:04:47.862924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:20:58.041 [2024-11-27 12:04:47.862935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:58.041 [2024-11-27 12:04:47.862950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:58.041 [2024-11-27 12:04:47.862960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:58.041 [2024-11-27 12:04:47.862974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:58.041 [2024-11-27 12:04:47.862984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:58.041 [2024-11-27 12:04:47.862997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:58.041 [2024-11-27 12:04:47.863007] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:58.041 [2024-11-27 12:04:47.863024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:58.041 [2024-11-27 12:04:47.863035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:58.041 [2024-11-27 12:04:47.863049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:58.041 [2024-11-27 12:04:47.863059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:58.041 [2024-11-27 12:04:47.863072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:58.041 [2024-11-27 12:04:47.863084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.041 [2024-11-27 12:04:47.863097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:58.041 [2024-11-27 12:04:47.863107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:20:58.041 [2024-11-27 12:04:47.863119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.041 [2024-11-27 12:04:47.863254] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
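A note on cross-checking the layout dumps above: the superblock (ftl_sb_v5) entries give each region as a hex block offset and size, while dump_region reports the same regions in MiB. With the 4 KiB FTL block size (block_size 4096 in the bdev JSON further down) the two views agree — e.g. region type 0x2 with blk_sz 0x5000 is the 80.00 MiB l2p region, and types 0xa-0xd with blk_sz 0x800 are the 8.00 MiB p2l0-p2l3 regions. A minimal conversion sketch (to_mib is a hypothetical helper, not part of the SPDK scripts):

  to_mib() {
    # hex block count -> MiB, assuming the 4096-byte FTL block size shown in this log
    awk -v blocks=$(($1)) 'BEGIN { printf "%.2f MiB\n", blocks * 4096 / (1024 * 1024) }'
  }
  to_mib 0x5000   # l2p region        -> 80.00 MiB
  to_mib 0x800    # each p2l region   -> 8.00 MiB
  to_mib 0x20     # superblock region -> 0.12 MiB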
00:20:58.041 [2024-11-27 12:04:47.863273] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:03.316 [2024-11-27 12:04:52.816603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.317 [2024-11-27 12:04:52.816666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:03.317 [2024-11-27 12:04:52.816683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4961.389 ms 00:21:03.317 [2024-11-27 12:04:52.816695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.317 [2024-11-27 12:04:52.855087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.317 [2024-11-27 12:04:52.855138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:03.317 [2024-11-27 12:04:52.855154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.133 ms 00:21:03.317 [2024-11-27 12:04:52.855167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.317 [2024-11-27 12:04:52.855330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.317 [2024-11-27 12:04:52.855347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:03.317 [2024-11-27 12:04:52.855374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:03.317 [2024-11-27 12:04:52.855390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.317 [2024-11-27 12:04:52.911254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.317 [2024-11-27 12:04:52.911303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:03.317 [2024-11-27 12:04:52.911316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.866 ms 00:21:03.317 [2024-11-27 12:04:52.911330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.317 [2024-11-27 12:04:52.911421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.317 [2024-11-27 12:04:52.911437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:03.317 [2024-11-27 12:04:52.911448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:03.317 [2024-11-27 12:04:52.911461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.317 [2024-11-27 12:04:52.911992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.317 [2024-11-27 12:04:52.912017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:03.317 [2024-11-27 12:04:52.912033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:21:03.317 [2024-11-27 12:04:52.912046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.317 [2024-11-27 12:04:52.912198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.317 [2024-11-27 12:04:52.912217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:03.317 [2024-11-27 12:04:52.912229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:21:03.317 [2024-11-27 12:04:52.912244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.317 [2024-11-27 12:04:52.932473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.317 [2024-11-27 12:04:52.932513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:03.317 [2024-11-27 
12:04:52.932527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.217 ms 00:21:03.317 [2024-11-27 12:04:52.932540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.317 [2024-11-27 12:04:52.944495] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:03.317 [2024-11-27 12:04:52.960746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.317 [2024-11-27 12:04:52.960786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:03.317 [2024-11-27 12:04:52.960806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.086 ms 00:21:03.317 [2024-11-27 12:04:52.960816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.317 [2024-11-27 12:04:53.060309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.317 [2024-11-27 12:04:53.060387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:03.317 [2024-11-27 12:04:53.060428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.582 ms 00:21:03.317 [2024-11-27 12:04:53.060439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.317 [2024-11-27 12:04:53.060650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.317 [2024-11-27 12:04:53.060665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:03.317 [2024-11-27 12:04:53.060699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:21:03.317 [2024-11-27 12:04:53.060709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.317 [2024-11-27 12:04:53.097625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.317 [2024-11-27 12:04:53.097795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:03.317 [2024-11-27 12:04:53.097822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.881 ms 00:21:03.317 [2024-11-27 12:04:53.097833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.317 [2024-11-27 12:04:53.133278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.317 [2024-11-27 12:04:53.133454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:03.317 [2024-11-27 12:04:53.133482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.365 ms 00:21:03.317 [2024-11-27 12:04:53.133493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.317 [2024-11-27 12:04:53.134251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.317 [2024-11-27 12:04:53.134273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:03.317 [2024-11-27 12:04:53.134287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:21:03.317 [2024-11-27 12:04:53.134298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.317 [2024-11-27 12:04:53.265816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.317 [2024-11-27 12:04:53.265856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:03.317 [2024-11-27 12:04:53.265879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 131.635 ms 00:21:03.317 [2024-11-27 12:04:53.265890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.317 [2024-11-27 
12:04:53.302141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.317 [2024-11-27 12:04:53.302178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:03.317 [2024-11-27 12:04:53.302195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.173 ms 00:21:03.317 [2024-11-27 12:04:53.302206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.317 [2024-11-27 12:04:53.337184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.317 [2024-11-27 12:04:53.337216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:03.317 [2024-11-27 12:04:53.337232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.956 ms 00:21:03.317 [2024-11-27 12:04:53.337241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.577 [2024-11-27 12:04:53.373525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.577 [2024-11-27 12:04:53.373587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:03.577 [2024-11-27 12:04:53.373604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.274 ms 00:21:03.577 [2024-11-27 12:04:53.373615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.577 [2024-11-27 12:04:53.373688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.577 [2024-11-27 12:04:53.373700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:03.577 [2024-11-27 12:04:53.373727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:03.577 [2024-11-27 12:04:53.373753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.577 [2024-11-27 12:04:53.373888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.577 [2024-11-27 12:04:53.373903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:03.577 [2024-11-27 12:04:53.373917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:03.577 [2024-11-27 12:04:53.373928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.577 [2024-11-27 12:04:53.375111] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5539.568 ms, result 0 00:21:03.577 { 00:21:03.577 "name": "ftl0", 00:21:03.577 "uuid": "99631124-c838-4dc1-b9ba-c9460ea8f2fe" 00:21:03.577 } 00:21:03.577 12:04:53 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:21:03.577 12:04:53 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:21:03.577 12:04:53 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:03.577 12:04:53 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:21:03.577 12:04:53 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:03.577 12:04:53 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:03.577 12:04:53 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:03.577 12:04:53 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:03.836 [ 00:21:03.836 { 00:21:03.836 "name": "ftl0", 00:21:03.836 "aliases": [ 00:21:03.836 "99631124-c838-4dc1-b9ba-c9460ea8f2fe" 00:21:03.836 ], 00:21:03.836 "product_name": "FTL 
disk", 00:21:03.836 "block_size": 4096, 00:21:03.836 "num_blocks": 20971520, 00:21:03.836 "uuid": "99631124-c838-4dc1-b9ba-c9460ea8f2fe", 00:21:03.836 "assigned_rate_limits": { 00:21:03.836 "rw_ios_per_sec": 0, 00:21:03.836 "rw_mbytes_per_sec": 0, 00:21:03.836 "r_mbytes_per_sec": 0, 00:21:03.836 "w_mbytes_per_sec": 0 00:21:03.836 }, 00:21:03.836 "claimed": false, 00:21:03.836 "zoned": false, 00:21:03.836 "supported_io_types": { 00:21:03.836 "read": true, 00:21:03.836 "write": true, 00:21:03.836 "unmap": true, 00:21:03.836 "flush": true, 00:21:03.836 "reset": false, 00:21:03.836 "nvme_admin": false, 00:21:03.836 "nvme_io": false, 00:21:03.836 "nvme_io_md": false, 00:21:03.836 "write_zeroes": true, 00:21:03.836 "zcopy": false, 00:21:03.836 "get_zone_info": false, 00:21:03.836 "zone_management": false, 00:21:03.836 "zone_append": false, 00:21:03.836 "compare": false, 00:21:03.836 "compare_and_write": false, 00:21:03.836 "abort": false, 00:21:03.836 "seek_hole": false, 00:21:03.836 "seek_data": false, 00:21:03.836 "copy": false, 00:21:03.836 "nvme_iov_md": false 00:21:03.836 }, 00:21:03.836 "driver_specific": { 00:21:03.836 "ftl": { 00:21:03.836 "base_bdev": "b55895a6-0550-4264-8522-53075604377a", 00:21:03.836 "cache": "nvc0n1p0" 00:21:03.836 } 00:21:03.836 } 00:21:03.836 } 00:21:03.836 ] 00:21:03.836 12:04:53 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:21:03.836 12:04:53 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:21:03.836 12:04:53 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:04.095 12:04:54 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:21:04.095 12:04:54 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:04.355 [2024-11-27 12:04:54.176567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.355 [2024-11-27 12:04:54.176616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:04.355 [2024-11-27 12:04:54.176630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:04.355 [2024-11-27 12:04:54.176646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.355 [2024-11-27 12:04:54.176697] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:04.355 [2024-11-27 12:04:54.180877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.355 [2024-11-27 12:04:54.183654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:04.355 [2024-11-27 12:04:54.183688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.163 ms 00:21:04.355 [2024-11-27 12:04:54.183700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.355 [2024-11-27 12:04:54.184676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.355 [2024-11-27 12:04:54.184699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:04.355 [2024-11-27 12:04:54.184714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.895 ms 00:21:04.355 [2024-11-27 12:04:54.184724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.355 [2024-11-27 12:04:54.187306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.355 [2024-11-27 12:04:54.187436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:04.355 
[2024-11-27 12:04:54.187477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.533 ms 00:21:04.355 [2024-11-27 12:04:54.187487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.355 [2024-11-27 12:04:54.192474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.355 [2024-11-27 12:04:54.192503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:04.355 [2024-11-27 12:04:54.192519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.934 ms 00:21:04.355 [2024-11-27 12:04:54.192529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.355 [2024-11-27 12:04:54.228671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.355 [2024-11-27 12:04:54.228706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:04.355 [2024-11-27 12:04:54.228755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.071 ms 00:21:04.355 [2024-11-27 12:04:54.228765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.355 [2024-11-27 12:04:54.249821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.355 [2024-11-27 12:04:54.249951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:04.355 [2024-11-27 12:04:54.249997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.019 ms 00:21:04.355 [2024-11-27 12:04:54.250008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.355 [2024-11-27 12:04:54.250395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.355 [2024-11-27 12:04:54.250414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:04.355 [2024-11-27 12:04:54.250429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:21:04.355 [2024-11-27 12:04:54.250439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.355 [2024-11-27 12:04:54.285368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.355 [2024-11-27 12:04:54.285399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:04.356 [2024-11-27 12:04:54.285415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.925 ms 00:21:04.356 [2024-11-27 12:04:54.285440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.356 [2024-11-27 12:04:54.320876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.356 [2024-11-27 12:04:54.320908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:04.356 [2024-11-27 12:04:54.320923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.419 ms 00:21:04.356 [2024-11-27 12:04:54.320948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.356 [2024-11-27 12:04:54.355159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.356 [2024-11-27 12:04:54.355192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:04.356 [2024-11-27 12:04:54.355208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.190 ms 00:21:04.356 [2024-11-27 12:04:54.355234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.356 [2024-11-27 12:04:54.390313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.356 [2024-11-27 12:04:54.390347] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:04.356 [2024-11-27 12:04:54.390389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.918 ms 00:21:04.356 [2024-11-27 12:04:54.390400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.356 [2024-11-27 12:04:54.390473] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:04.356 [2024-11-27 12:04:54.390489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 
[2024-11-27 12:04:54.390759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.390992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.391002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.391016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.391027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.391042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.391052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:21:04.356 [2024-11-27 12:04:54.391065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.391075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.391089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.391099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.391112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.391123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:04.356 [2024-11-27 12:04:54.391136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:04.357 [2024-11-27 12:04:54.391737] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:04.357 [2024-11-27 12:04:54.391750] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 99631124-c838-4dc1-b9ba-c9460ea8f2fe 00:21:04.357 [2024-11-27 12:04:54.391761] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:04.357 [2024-11-27 12:04:54.391775] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:04.357 [2024-11-27 12:04:54.391787] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:04.357 [2024-11-27 12:04:54.391800] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:04.357 [2024-11-27 12:04:54.391810] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:04.357 [2024-11-27 12:04:54.391823] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:04.357 [2024-11-27 12:04:54.391833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:04.357 [2024-11-27 12:04:54.391844] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:04.357 [2024-11-27 12:04:54.391853] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:04.357 [2024-11-27 12:04:54.391865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.357 [2024-11-27 12:04:54.391876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:04.357 [2024-11-27 12:04:54.391889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.397 ms 00:21:04.357 [2024-11-27 12:04:54.391899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.617 [2024-11-27 12:04:54.411648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.617 [2024-11-27 12:04:54.411680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:04.617 [2024-11-27 12:04:54.411696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.703 ms 00:21:04.617 [2024-11-27 12:04:54.411706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.617 [2024-11-27 12:04:54.412223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.617 [2024-11-27 12:04:54.412238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:04.617 [2024-11-27 12:04:54.412252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.465 ms 00:21:04.617 [2024-11-27 12:04:54.412262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.617 [2024-11-27 12:04:54.479483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.617 [2024-11-27 12:04:54.479518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:04.617 [2024-11-27 12:04:54.479533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.617 [2024-11-27 12:04:54.479559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
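The "WAF: inf" in the statistics dump above is expected for this run: WAF (write-amplification factor) is effectively total device writes divided by user writes, and the device was shut down right after a clean startup with no user I/O, so the ratio here is 960 / 0. A one-line sketch of the arithmetic, with the numbers taken from the dump (the guard mirrors the inf case, since awk itself would abort on a literal division by zero):

  awk 'BEGIN { total = 960; user = 0; print (user ? total / user : "inf") }'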
00:21:04.617 [2024-11-27 12:04:54.479635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.617 [2024-11-27 12:04:54.479647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:04.617 [2024-11-27 12:04:54.479660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.617 [2024-11-27 12:04:54.479670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.617 [2024-11-27 12:04:54.479805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.617 [2024-11-27 12:04:54.479822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:04.617 [2024-11-27 12:04:54.479837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.617 [2024-11-27 12:04:54.479847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.617 [2024-11-27 12:04:54.479897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.617 [2024-11-27 12:04:54.479907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:04.617 [2024-11-27 12:04:54.479920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.617 [2024-11-27 12:04:54.479930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.617 [2024-11-27 12:04:54.606679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.617 [2024-11-27 12:04:54.606910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:04.617 [2024-11-27 12:04:54.606955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.617 [2024-11-27 12:04:54.606966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.877 [2024-11-27 12:04:54.706354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.877 [2024-11-27 12:04:54.706429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:04.877 [2024-11-27 12:04:54.706446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.877 [2024-11-27 12:04:54.706457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.877 [2024-11-27 12:04:54.706609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.877 [2024-11-27 12:04:54.706623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:04.877 [2024-11-27 12:04:54.706640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.877 [2024-11-27 12:04:54.706651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.877 [2024-11-27 12:04:54.706765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.877 [2024-11-27 12:04:54.706777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:04.877 [2024-11-27 12:04:54.706790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.877 [2024-11-27 12:04:54.706800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.877 [2024-11-27 12:04:54.706960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.877 [2024-11-27 12:04:54.706974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:04.877 [2024-11-27 12:04:54.706990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.877 [2024-11-27 
12:04:54.707000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.877 [2024-11-27 12:04:54.707074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.877 [2024-11-27 12:04:54.707086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:04.877 [2024-11-27 12:04:54.707100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.877 [2024-11-27 12:04:54.707110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.877 [2024-11-27 12:04:54.707190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.877 [2024-11-27 12:04:54.707202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:04.877 [2024-11-27 12:04:54.707215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.877 [2024-11-27 12:04:54.707228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.877 [2024-11-27 12:04:54.707309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.877 [2024-11-27 12:04:54.707321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:04.877 [2024-11-27 12:04:54.707334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.877 [2024-11-27 12:04:54.707344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.877 [2024-11-27 12:04:54.707619] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 531.891 ms, result 0 00:21:04.877 true 00:21:04.877 12:04:54 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76786 00:21:04.877 12:04:54 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76786 ']' 00:21:04.877 12:04:54 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76786 00:21:04.877 12:04:54 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:21:04.877 12:04:54 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.877 12:04:54 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76786 00:21:04.877 killing process with pid 76786 00:21:04.877 12:04:54 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:04.877 12:04:54 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:04.877 12:04:54 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76786' 00:21:04.877 12:04:54 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76786 00:21:04.877 12:04:54 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76786 00:21:10.154 12:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:21:10.154 12:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:10.154 12:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:21:10.154 12:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:10.154 12:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:10.154 12:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:10.154 12:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:10.154 12:04:59 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:10.154 12:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:10.154 12:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:10.154 12:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:10.154 12:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:21:10.154 12:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:10.154 12:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:10.154 12:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:10.155 12:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:21:10.155 12:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:10.155 12:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:10.155 12:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:10.155 12:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:21:10.155 12:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:10.155 12:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:10.155 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:21:10.155 fio-3.35 00:21:10.155 Starting 1 thread 00:21:16.725 00:21:16.725 test: (groupid=0, jobs=1): err= 0: pid=77016: Wed Nov 27 12:05:05 2024 00:21:16.725 read: IOPS=872, BW=58.0MiB/s (60.8MB/s)(255MiB/4392msec) 00:21:16.725 slat (nsec): min=4516, max=57510, avg=11571.81, stdev=4201.91 00:21:16.725 clat (usec): min=345, max=7746, avg=521.73, stdev=129.21 00:21:16.725 lat (usec): min=359, max=7760, avg=533.30, stdev=129.54 00:21:16.725 clat percentiles (usec): 00:21:16.725 | 1.00th=[ 416], 5.00th=[ 457], 10.00th=[ 474], 20.00th=[ 482], 00:21:16.725 | 30.00th=[ 490], 40.00th=[ 502], 50.00th=[ 519], 60.00th=[ 529], 00:21:16.725 | 70.00th=[ 545], 80.00th=[ 553], 90.00th=[ 570], 95.00th=[ 586], 00:21:16.725 | 99.00th=[ 676], 99.50th=[ 742], 99.90th=[ 889], 99.95th=[ 2245], 00:21:16.725 | 99.99th=[ 7767] 00:21:16.725 write: IOPS=878, BW=58.4MiB/s (61.2MB/s)(256MiB/4387msec); 0 zone resets 00:21:16.725 slat (nsec): min=16152, max=90916, avg=23425.37, stdev=5206.31 00:21:16.725 clat (usec): min=419, max=1246, avg=574.67, stdev=64.68 00:21:16.725 lat (usec): min=441, max=1264, avg=598.09, stdev=65.03 00:21:16.725 clat percentiles (usec): 00:21:16.725 | 1.00th=[ 474], 5.00th=[ 494], 10.00th=[ 506], 20.00th=[ 537], 00:21:16.725 | 30.00th=[ 545], 40.00th=[ 562], 50.00th=[ 570], 60.00th=[ 578], 00:21:16.725 | 70.00th=[ 586], 80.00th=[ 603], 90.00th=[ 627], 95.00th=[ 652], 00:21:16.725 | 99.00th=[ 873], 99.50th=[ 930], 99.90th=[ 988], 99.95th=[ 1004], 00:21:16.725 | 99.99th=[ 1254] 00:21:16.725 bw ( KiB/s): min=59024, max=61744, per=100.00%, avg=59891.00, stdev=999.23, samples=8 00:21:16.725 iops : min= 868, max= 908, avg=880.75, stdev=14.69, samples=8 00:21:16.725 lat (usec) : 500=23.01%, 750=75.65%, 1000=1.29% 
00:21:16.725 lat (msec) : 2=0.03%, 4=0.01%, 10=0.01% 00:21:16.725 cpu : usr=99.18%, sys=0.11%, ctx=7, majf=0, minf=1169 00:21:16.725 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:16.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.725 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.725 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:16.725 00:21:16.725 Run status group 0 (all jobs): 00:21:16.725 READ: bw=58.0MiB/s (60.8MB/s), 58.0MiB/s-58.0MiB/s (60.8MB/s-60.8MB/s), io=255MiB (267MB), run=4392-4392msec 00:21:16.725 WRITE: bw=58.4MiB/s (61.2MB/s), 58.4MiB/s-58.4MiB/s (61.2MB/s-61.2MB/s), io=256MiB (269MB), run=4387-4387msec 00:21:17.661 ----------------------------------------------------- 00:21:17.661 Suppressions used: 00:21:17.661 count bytes template 00:21:17.661 1 5 /usr/src/fio/parse.c 00:21:17.661 1 8 libtcmalloc_minimal.so 00:21:17.661 1 904 libcrypto.so 00:21:17.661 ----------------------------------------------------- 00:21:17.661 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:17.661 12:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:17.965 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:17.965 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:17.965 fio-3.35 00:21:17.965 Starting 2 threads 00:21:50.118 00:21:50.118 first_half: (groupid=0, jobs=1): err= 0: pid=77128: Wed Nov 27 12:05:36 2024 00:21:50.118 read: IOPS=2387, BW=9551KiB/s (9780kB/s)(255MiB/27351msec) 00:21:50.118 slat (usec): min=3, max=105, avg= 9.52, stdev= 3.31 00:21:50.118 clat (usec): min=1173, max=333753, avg=41961.20, stdev=20560.57 00:21:50.118 lat (usec): min=1184, max=333762, avg=41970.72, stdev=20561.01 00:21:50.118 clat percentiles (msec): 00:21:50.118 | 1.00th=[ 19], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:21:50.118 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:21:50.118 | 70.00th=[ 39], 80.00th=[ 43], 90.00th=[ 47], 95.00th=[ 62], 00:21:50.118 | 99.00th=[ 157], 99.50th=[ 180], 99.90th=[ 222], 99.95th=[ 271], 00:21:50.118 | 99.99th=[ 326] 00:21:50.118 write: IOPS=2863, BW=11.2MiB/s (11.7MB/s)(256MiB/22889msec); 0 zone resets 00:21:50.118 slat (usec): min=4, max=2389, avg=10.02, stdev=14.52 00:21:50.118 clat (usec): min=503, max=101320, avg=11571.93, stdev=18759.26 00:21:50.118 lat (usec): min=518, max=101329, avg=11581.94, stdev=18759.47 00:21:50.118 clat percentiles (usec): 00:21:50.118 | 1.00th=[ 1188], 5.00th=[ 1565], 10.00th=[ 1827], 20.00th=[ 2245], 00:21:50.118 | 30.00th=[ 3916], 40.00th=[ 5932], 50.00th=[ 7111], 60.00th=[ 8225], 00:21:50.118 | 70.00th=[ 9372], 80.00th=[ 11731], 90.00th=[ 14615], 95.00th=[ 48497], 00:21:50.118 | 99.00th=[ 94897], 99.50th=[ 95945], 99.90th=[ 98042], 99.95th=[ 99091], 00:21:50.118 | 99.99th=[100140] 00:21:50.118 bw ( KiB/s): min= 335, max=40048, per=92.93%, avg=19418.04, stdev=13547.60, samples=27 00:21:50.118 iops : min= 83, max=10012, avg=4854.48, stdev=3386.92, samples=27 00:21:50.118 lat (usec) : 750=0.03%, 1000=0.15% 00:21:50.118 lat (msec) : 2=7.04%, 4=8.22%, 10=21.31%, 20=9.99%, 50=46.27% 00:21:50.118 lat (msec) : 100=5.68%, 250=1.28%, 500=0.03% 00:21:50.118 cpu : usr=99.12%, sys=0.21%, ctx=67, majf=0, minf=5565 00:21:50.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:50.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.118 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:50.118 issued rwts: total=65309,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:50.118 second_half: (groupid=0, jobs=1): err= 0: pid=77129: Wed Nov 27 12:05:36 2024 00:21:50.118 read: IOPS=2373, BW=9495KiB/s (9722kB/s)(255MiB/27516msec) 00:21:50.118 slat (nsec): min=3425, max=51126, avg=7949.76, stdev=4435.88 00:21:50.118 clat (usec): min=1145, max=339546, avg=41586.11, stdev=25466.02 00:21:50.118 lat (usec): min=1151, max=339551, avg=41594.06, stdev=25467.04 00:21:50.118 clat percentiles (msec): 00:21:50.118 | 1.00th=[ 8], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 36], 00:21:50.118 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:21:50.118 | 70.00th=[ 39], 80.00th=[ 40], 90.00th=[ 46], 
95.00th=[ 60], 00:21:50.118 | 99.00th=[ 190], 99.50th=[ 211], 99.90th=[ 239], 99.95th=[ 268], 00:21:50.118 | 99.99th=[ 334] 00:21:50.118 write: IOPS=2611, BW=10.2MiB/s (10.7MB/s)(256MiB/25091msec); 0 zone resets 00:21:50.118 slat (usec): min=4, max=633, avg= 9.82, stdev= 7.34 00:21:50.118 clat (usec): min=393, max=101970, avg=12265.24, stdev=20285.80 00:21:50.119 lat (usec): min=415, max=101977, avg=12275.05, stdev=20286.40 00:21:50.119 clat percentiles (usec): 00:21:50.119 | 1.00th=[ 1106], 5.00th=[ 1418], 10.00th=[ 1631], 20.00th=[ 1909], 00:21:50.119 | 30.00th=[ 2180], 40.00th=[ 3621], 50.00th=[ 5669], 60.00th=[ 7177], 00:21:50.119 | 70.00th=[ 9241], 80.00th=[ 12911], 90.00th=[ 36439], 95.00th=[ 56886], 00:21:50.119 | 99.00th=[ 95945], 99.50th=[ 96994], 99.90th=[ 99091], 99.95th=[100140], 00:21:50.119 | 99.99th=[101188] 00:21:50.119 bw ( KiB/s): min= 520, max=53880, per=92.94%, avg=19420.70, stdev=16184.90, samples=27 00:21:50.119 iops : min= 130, max=13470, avg=4855.15, stdev=4046.19, samples=27 00:21:50.119 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.19% 00:21:50.119 lat (msec) : 2=11.80%, 4=9.05%, 10=16.81%, 20=7.73%, 50=48.86% 00:21:50.119 lat (msec) : 100=3.85%, 250=1.63%, 500=0.04% 00:21:50.119 cpu : usr=99.24%, sys=0.21%, ctx=51, majf=0, minf=5542 00:21:50.119 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:50.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:50.119 complete : 0=0.0%, 4=99.8%, 8=0.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:50.119 issued rwts: total=65313,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:50.119 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:50.119 00:21:50.119 Run status group 0 (all jobs): 00:21:50.119 READ: bw=18.5MiB/s (19.4MB/s), 9495KiB/s-9551KiB/s (9722kB/s-9780kB/s), io=510MiB (535MB), run=27351-27516msec 00:21:50.119 WRITE: bw=20.4MiB/s (21.4MB/s), 10.2MiB/s-11.2MiB/s (10.7MB/s-11.7MB/s), io=512MiB (537MB), run=22889-25091msec 00:21:50.119 ----------------------------------------------------- 00:21:50.119 Suppressions used: 00:21:50.119 count bytes template 00:21:50.119 2 10 /usr/src/fio/parse.c 00:21:50.119 4 384 /usr/src/fio/iolog.c 00:21:50.119 1 8 libtcmalloc_minimal.so 00:21:50.119 1 904 libcrypto.so 00:21:50.119 ----------------------------------------------------- 00:21:50.119 00:21:50.119 12:05:38 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:21:50.119 12:05:38 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:50.119 12:05:38 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:50.119 12:05:39 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:50.119 12:05:39 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:21:50.119 12:05:39 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:50.119 12:05:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:50.119 12:05:39 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:50.119 12:05:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:50.119 12:05:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:50.119 12:05:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:21:50.119 12:05:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:50.119 12:05:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:50.119 12:05:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:21:50.119 12:05:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:50.119 12:05:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:50.119 12:05:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:50.119 12:05:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:21:50.119 12:05:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:50.119 12:05:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:50.119 12:05:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:50.119 12:05:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:21:50.119 12:05:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:50.119 12:05:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:50.119 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:50.119 fio-3.35 00:21:50.119 Starting 1 thread 00:22:08.207 00:22:08.207 test: (groupid=0, jobs=1): err= 0: pid=77480: Wed Nov 27 12:05:57 2024 00:22:08.207 read: IOPS=6283, BW=24.5MiB/s (25.7MB/s)(255MiB/10376msec) 00:22:08.207 slat (nsec): min=3147, max=40362, avg=7539.87, stdev=3277.69 00:22:08.207 clat (usec): min=672, max=40090, avg=20358.02, stdev=1266.49 00:22:08.207 lat (usec): min=683, max=40101, avg=20365.56, stdev=1266.42 00:22:08.207 clat percentiles (usec): 00:22:08.207 | 1.00th=[19268], 5.00th=[19530], 10.00th=[19530], 20.00th=[19792], 00:22:08.207 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20055], 60.00th=[20317], 00:22:08.207 | 70.00th=[20579], 80.00th=[20841], 90.00th=[21103], 95.00th=[21365], 00:22:08.207 | 99.00th=[24511], 99.50th=[28967], 99.90th=[33817], 99.95th=[35390], 00:22:08.207 | 99.99th=[39584] 00:22:08.207 write: IOPS=10.4k, BW=40.5MiB/s (42.5MB/s)(256MiB/6317msec); 0 zone resets 00:22:08.207 slat (usec): min=4, max=743, avg= 8.65, stdev= 8.41 00:22:08.207 clat (usec): min=765, max=70145, avg=12279.04, stdev=14939.01 00:22:08.207 lat (usec): min=786, max=70153, avg=12287.70, stdev=14938.99 00:22:08.207 clat percentiles (usec): 00:22:08.207 | 1.00th=[ 1237], 5.00th=[ 1500], 10.00th=[ 1713], 20.00th=[ 1942], 00:22:08.207 | 30.00th=[ 2147], 40.00th=[ 2573], 50.00th=[ 8029], 60.00th=[ 9765], 00:22:08.207 | 70.00th=[10814], 80.00th=[13042], 90.00th=[44827], 95.00th=[46400], 00:22:08.207 | 99.00th=[48497], 99.50th=[49546], 99.90th=[51643], 99.95th=[56886], 00:22:08.207 | 99.99th=[65799] 00:22:08.207 bw ( KiB/s): min=22024, max=55240, per=97.18%, avg=40329.85, stdev=8579.40, samples=13 00:22:08.207 iops : min= 5506, max=13810, avg=10082.46, stdev=2144.85, samples=13 00:22:08.207 lat (usec) : 750=0.01%, 1000=0.04% 00:22:08.207 lat (msec) : 2=11.39%, 4=9.39%, 10=10.51%, 20=29.11%, 50=39.41% 00:22:08.207 lat (msec) : 100=0.13% 00:22:08.207 cpu : 
usr=98.99%, sys=0.29%, ctx=35, majf=0, minf=5565 00:22:08.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:08.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.207 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:08.207 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.207 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:08.207 00:22:08.207 Run status group 0 (all jobs): 00:22:08.207 READ: bw=24.5MiB/s (25.7MB/s), 24.5MiB/s-24.5MiB/s (25.7MB/s-25.7MB/s), io=255MiB (267MB), run=10376-10376msec 00:22:08.207 WRITE: bw=40.5MiB/s (42.5MB/s), 40.5MiB/s-40.5MiB/s (42.5MB/s-42.5MB/s), io=256MiB (268MB), run=6317-6317msec 00:22:09.585 ----------------------------------------------------- 00:22:09.585 Suppressions used: 00:22:09.585 count bytes template 00:22:09.585 1 5 /usr/src/fio/parse.c 00:22:09.585 2 192 /usr/src/fio/iolog.c 00:22:09.585 1 8 libtcmalloc_minimal.so 00:22:09.585 1 904 libcrypto.so 00:22:09.585 ----------------------------------------------------- 00:22:09.585 00:22:09.586 12:05:59 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:22:09.586 12:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.586 12:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:09.586 12:05:59 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:09.586 Remove shared memory files 00:22:09.586 12:05:59 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:22:09.586 12:05:59 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:09.586 12:05:59 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:22:09.586 12:05:59 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:22:09.586 12:05:59 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57723 /dev/shm/spdk_tgt_trace.pid75689 00:22:09.586 12:05:59 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:09.586 12:05:59 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:22:09.586 ************************************ 00:22:09.586 END TEST ftl_fio_basic 00:22:09.586 ************************************ 00:22:09.586 00:22:09.586 real 1m15.916s 00:22:09.586 user 2m46.638s 00:22:09.586 sys 0m4.044s 00:22:09.586 12:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:09.586 12:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:09.586 12:05:59 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:22:09.586 12:05:59 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:09.586 12:05:59 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:09.586 12:05:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:09.586 ************************************ 00:22:09.586 START TEST ftl_bdevperf 00:22:09.586 ************************************ 00:22:09.586 12:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:22:09.846 * Looking for test storage... 
00:22:09.846 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:09.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.846 --rc genhtml_branch_coverage=1 00:22:09.846 --rc genhtml_function_coverage=1 00:22:09.846 --rc genhtml_legend=1 00:22:09.846 --rc geninfo_all_blocks=1 00:22:09.846 --rc geninfo_unexecuted_blocks=1 00:22:09.846 00:22:09.846 ' 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:09.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.846 --rc genhtml_branch_coverage=1 00:22:09.846 
--rc genhtml_function_coverage=1 00:22:09.846 --rc genhtml_legend=1 00:22:09.846 --rc geninfo_all_blocks=1 00:22:09.846 --rc geninfo_unexecuted_blocks=1 00:22:09.846 00:22:09.846 ' 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:09.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.846 --rc genhtml_branch_coverage=1 00:22:09.846 --rc genhtml_function_coverage=1 00:22:09.846 --rc genhtml_legend=1 00:22:09.846 --rc geninfo_all_blocks=1 00:22:09.846 --rc geninfo_unexecuted_blocks=1 00:22:09.846 00:22:09.846 ' 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:09.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.846 --rc genhtml_branch_coverage=1 00:22:09.846 --rc genhtml_function_coverage=1 00:22:09.846 --rc genhtml_legend=1 00:22:09.846 --rc geninfo_all_blocks=1 00:22:09.846 --rc geninfo_unexecuted_blocks=1 00:22:09.846 00:22:09.846 ' 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77753 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77753 00:22:09.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77753 ']' 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.846 12:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:10.105 [2024-11-27 12:05:59.962446] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:22:10.105 [2024-11-27 12:05:59.962731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77753 ] 00:22:10.105 [2024-11-27 12:06:00.149873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.363 [2024-11-27 12:06:00.257123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.930 12:06:00 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:10.931 12:06:00 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:22:10.931 12:06:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:10.931 12:06:00 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:22:10.931 12:06:00 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:10.931 12:06:00 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:22:10.931 12:06:00 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:22:10.931 12:06:00 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:11.189 12:06:01 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:11.189 12:06:01 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:22:11.189 12:06:01 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:11.189 12:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:11.189 12:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:11.189 12:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:22:11.189 12:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:11.189 12:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:11.448 12:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:11.448 { 00:22:11.448 "name": "nvme0n1", 00:22:11.448 "aliases": [ 00:22:11.448 "058ea58f-9528-47c9-9abd-b7fdccb96fc3" 00:22:11.448 ], 00:22:11.448 "product_name": "NVMe disk", 00:22:11.448 "block_size": 4096, 00:22:11.448 "num_blocks": 1310720, 00:22:11.448 "uuid": "058ea58f-9528-47c9-9abd-b7fdccb96fc3", 00:22:11.448 "numa_id": -1, 00:22:11.448 "assigned_rate_limits": { 00:22:11.448 "rw_ios_per_sec": 0, 00:22:11.448 "rw_mbytes_per_sec": 0, 00:22:11.448 "r_mbytes_per_sec": 0, 00:22:11.448 "w_mbytes_per_sec": 0 00:22:11.448 }, 00:22:11.448 "claimed": true, 00:22:11.448 "claim_type": "read_many_write_one", 00:22:11.448 "zoned": false, 00:22:11.448 "supported_io_types": { 00:22:11.448 "read": true, 00:22:11.448 "write": true, 00:22:11.448 "unmap": true, 00:22:11.448 "flush": true, 00:22:11.448 "reset": true, 00:22:11.448 "nvme_admin": true, 00:22:11.448 "nvme_io": true, 00:22:11.448 "nvme_io_md": false, 00:22:11.448 "write_zeroes": true, 00:22:11.448 "zcopy": false, 00:22:11.448 "get_zone_info": false, 00:22:11.448 "zone_management": false, 00:22:11.448 "zone_append": false, 00:22:11.448 "compare": true, 00:22:11.448 "compare_and_write": false, 00:22:11.448 "abort": true, 00:22:11.448 "seek_hole": false, 00:22:11.448 "seek_data": false, 00:22:11.448 "copy": true, 00:22:11.448 "nvme_iov_md": false 00:22:11.448 }, 00:22:11.448 "driver_specific": { 00:22:11.448 
"nvme": [ 00:22:11.448 { 00:22:11.448 "pci_address": "0000:00:11.0", 00:22:11.448 "trid": { 00:22:11.448 "trtype": "PCIe", 00:22:11.448 "traddr": "0000:00:11.0" 00:22:11.448 }, 00:22:11.448 "ctrlr_data": { 00:22:11.448 "cntlid": 0, 00:22:11.448 "vendor_id": "0x1b36", 00:22:11.449 "model_number": "QEMU NVMe Ctrl", 00:22:11.449 "serial_number": "12341", 00:22:11.449 "firmware_revision": "8.0.0", 00:22:11.449 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:11.449 "oacs": { 00:22:11.449 "security": 0, 00:22:11.449 "format": 1, 00:22:11.449 "firmware": 0, 00:22:11.449 "ns_manage": 1 00:22:11.449 }, 00:22:11.449 "multi_ctrlr": false, 00:22:11.449 "ana_reporting": false 00:22:11.449 }, 00:22:11.449 "vs": { 00:22:11.449 "nvme_version": "1.4" 00:22:11.449 }, 00:22:11.449 "ns_data": { 00:22:11.449 "id": 1, 00:22:11.449 "can_share": false 00:22:11.449 } 00:22:11.449 } 00:22:11.449 ], 00:22:11.449 "mp_policy": "active_passive" 00:22:11.449 } 00:22:11.449 } 00:22:11.449 ]' 00:22:11.449 12:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:11.449 12:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:11.449 12:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:11.449 12:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:11.449 12:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:11.449 12:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:22:11.449 12:06:01 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:22:11.449 12:06:01 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:11.449 12:06:01 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:22:11.449 12:06:01 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:11.449 12:06:01 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:11.708 12:06:01 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=8d31535f-a50b-4530-8571-8025796c7697 00:22:11.708 12:06:01 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:22:11.708 12:06:01 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8d31535f-a50b-4530-8571-8025796c7697 00:22:11.966 12:06:01 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:11.966 12:06:02 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=51394a6a-8ad0-465f-8819-f281cd6348e0 00:22:11.966 12:06:02 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 51394a6a-8ad0-465f-8819-f281cd6348e0 00:22:12.225 12:06:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=91a8625f-d67e-4bf0-917a-65f7b6f6b200 00:22:12.225 12:06:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 91a8625f-d67e-4bf0-917a-65f7b6f6b200 00:22:12.225 12:06:02 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:22:12.225 12:06:02 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:12.225 12:06:02 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=91a8625f-d67e-4bf0-917a-65f7b6f6b200 00:22:12.225 12:06:02 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:22:12.225 12:06:02 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 91a8625f-d67e-4bf0-917a-65f7b6f6b200 00:22:12.225 12:06:02 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=91a8625f-d67e-4bf0-917a-65f7b6f6b200 00:22:12.225 12:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:12.225 12:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:22:12.225 12:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:12.225 12:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 91a8625f-d67e-4bf0-917a-65f7b6f6b200 00:22:12.484 12:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:12.484 { 00:22:12.484 "name": "91a8625f-d67e-4bf0-917a-65f7b6f6b200", 00:22:12.484 "aliases": [ 00:22:12.484 "lvs/nvme0n1p0" 00:22:12.484 ], 00:22:12.484 "product_name": "Logical Volume", 00:22:12.484 "block_size": 4096, 00:22:12.484 "num_blocks": 26476544, 00:22:12.484 "uuid": "91a8625f-d67e-4bf0-917a-65f7b6f6b200", 00:22:12.484 "assigned_rate_limits": { 00:22:12.484 "rw_ios_per_sec": 0, 00:22:12.484 "rw_mbytes_per_sec": 0, 00:22:12.484 "r_mbytes_per_sec": 0, 00:22:12.484 "w_mbytes_per_sec": 0 00:22:12.484 }, 00:22:12.484 "claimed": false, 00:22:12.484 "zoned": false, 00:22:12.484 "supported_io_types": { 00:22:12.484 "read": true, 00:22:12.484 "write": true, 00:22:12.484 "unmap": true, 00:22:12.484 "flush": false, 00:22:12.484 "reset": true, 00:22:12.484 "nvme_admin": false, 00:22:12.484 "nvme_io": false, 00:22:12.484 "nvme_io_md": false, 00:22:12.484 "write_zeroes": true, 00:22:12.484 "zcopy": false, 00:22:12.484 "get_zone_info": false, 00:22:12.484 "zone_management": false, 00:22:12.484 "zone_append": false, 00:22:12.484 "compare": false, 00:22:12.484 "compare_and_write": false, 00:22:12.484 "abort": false, 00:22:12.484 "seek_hole": true, 00:22:12.484 "seek_data": true, 00:22:12.484 "copy": false, 00:22:12.484 "nvme_iov_md": false 00:22:12.484 }, 00:22:12.484 "driver_specific": { 00:22:12.484 "lvol": { 00:22:12.484 "lvol_store_uuid": "51394a6a-8ad0-465f-8819-f281cd6348e0", 00:22:12.484 "base_bdev": "nvme0n1", 00:22:12.484 "thin_provision": true, 00:22:12.484 "num_allocated_clusters": 0, 00:22:12.484 "snapshot": false, 00:22:12.484 "clone": false, 00:22:12.484 "esnap_clone": false 00:22:12.485 } 00:22:12.485 } 00:22:12.485 } 00:22:12.485 ]' 00:22:12.485 12:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:12.485 12:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:12.485 12:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:12.485 12:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:12.485 12:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:12.485 12:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:22:12.485 12:06:02 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:22:12.485 12:06:02 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:22:12.485 12:06:02 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:12.744 12:06:02 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:12.744 12:06:02 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:12.744 12:06:02 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 91a8625f-d67e-4bf0-917a-65f7b6f6b200 00:22:12.744 12:06:02 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=91a8625f-d67e-4bf0-917a-65f7b6f6b200 00:22:12.744 12:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:12.744 12:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:22:12.744 12:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:12.744 12:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 91a8625f-d67e-4bf0-917a-65f7b6f6b200 00:22:13.003 12:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:13.003 { 00:22:13.003 "name": "91a8625f-d67e-4bf0-917a-65f7b6f6b200", 00:22:13.003 "aliases": [ 00:22:13.003 "lvs/nvme0n1p0" 00:22:13.003 ], 00:22:13.003 "product_name": "Logical Volume", 00:22:13.003 "block_size": 4096, 00:22:13.003 "num_blocks": 26476544, 00:22:13.003 "uuid": "91a8625f-d67e-4bf0-917a-65f7b6f6b200", 00:22:13.003 "assigned_rate_limits": { 00:22:13.003 "rw_ios_per_sec": 0, 00:22:13.003 "rw_mbytes_per_sec": 0, 00:22:13.003 "r_mbytes_per_sec": 0, 00:22:13.003 "w_mbytes_per_sec": 0 00:22:13.003 }, 00:22:13.003 "claimed": false, 00:22:13.003 "zoned": false, 00:22:13.003 "supported_io_types": { 00:22:13.003 "read": true, 00:22:13.003 "write": true, 00:22:13.003 "unmap": true, 00:22:13.003 "flush": false, 00:22:13.003 "reset": true, 00:22:13.003 "nvme_admin": false, 00:22:13.003 "nvme_io": false, 00:22:13.003 "nvme_io_md": false, 00:22:13.003 "write_zeroes": true, 00:22:13.003 "zcopy": false, 00:22:13.003 "get_zone_info": false, 00:22:13.003 "zone_management": false, 00:22:13.003 "zone_append": false, 00:22:13.003 "compare": false, 00:22:13.003 "compare_and_write": false, 00:22:13.003 "abort": false, 00:22:13.003 "seek_hole": true, 00:22:13.003 "seek_data": true, 00:22:13.003 "copy": false, 00:22:13.003 "nvme_iov_md": false 00:22:13.003 }, 00:22:13.003 "driver_specific": { 00:22:13.003 "lvol": { 00:22:13.003 "lvol_store_uuid": "51394a6a-8ad0-465f-8819-f281cd6348e0", 00:22:13.003 "base_bdev": "nvme0n1", 00:22:13.003 "thin_provision": true, 00:22:13.003 "num_allocated_clusters": 0, 00:22:13.003 "snapshot": false, 00:22:13.003 "clone": false, 00:22:13.003 "esnap_clone": false 00:22:13.003 } 00:22:13.003 } 00:22:13.003 } 00:22:13.003 ]' 00:22:13.003 12:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:13.003 12:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:13.003 12:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:13.262 12:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:13.262 12:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:13.262 12:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:22:13.262 12:06:03 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:22:13.262 12:06:03 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:13.262 12:06:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:22:13.262 12:06:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 91a8625f-d67e-4bf0-917a-65f7b6f6b200 00:22:13.262 12:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=91a8625f-d67e-4bf0-917a-65f7b6f6b200 00:22:13.262 12:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:13.262 12:06:03 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:22:13.262 12:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:13.262 12:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 91a8625f-d67e-4bf0-917a-65f7b6f6b200 00:22:13.522 12:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:13.522 { 00:22:13.522 "name": "91a8625f-d67e-4bf0-917a-65f7b6f6b200", 00:22:13.522 "aliases": [ 00:22:13.522 "lvs/nvme0n1p0" 00:22:13.522 ], 00:22:13.522 "product_name": "Logical Volume", 00:22:13.522 "block_size": 4096, 00:22:13.522 "num_blocks": 26476544, 00:22:13.522 "uuid": "91a8625f-d67e-4bf0-917a-65f7b6f6b200", 00:22:13.522 "assigned_rate_limits": { 00:22:13.522 "rw_ios_per_sec": 0, 00:22:13.522 "rw_mbytes_per_sec": 0, 00:22:13.522 "r_mbytes_per_sec": 0, 00:22:13.522 "w_mbytes_per_sec": 0 00:22:13.522 }, 00:22:13.522 "claimed": false, 00:22:13.522 "zoned": false, 00:22:13.522 "supported_io_types": { 00:22:13.522 "read": true, 00:22:13.522 "write": true, 00:22:13.522 "unmap": true, 00:22:13.522 "flush": false, 00:22:13.522 "reset": true, 00:22:13.522 "nvme_admin": false, 00:22:13.522 "nvme_io": false, 00:22:13.522 "nvme_io_md": false, 00:22:13.522 "write_zeroes": true, 00:22:13.522 "zcopy": false, 00:22:13.522 "get_zone_info": false, 00:22:13.522 "zone_management": false, 00:22:13.522 "zone_append": false, 00:22:13.522 "compare": false, 00:22:13.522 "compare_and_write": false, 00:22:13.522 "abort": false, 00:22:13.522 "seek_hole": true, 00:22:13.522 "seek_data": true, 00:22:13.522 "copy": false, 00:22:13.522 "nvme_iov_md": false 00:22:13.522 }, 00:22:13.522 "driver_specific": { 00:22:13.522 "lvol": { 00:22:13.522 "lvol_store_uuid": "51394a6a-8ad0-465f-8819-f281cd6348e0", 00:22:13.522 "base_bdev": "nvme0n1", 00:22:13.522 "thin_provision": true, 00:22:13.522 "num_allocated_clusters": 0, 00:22:13.522 "snapshot": false, 00:22:13.522 "clone": false, 00:22:13.522 "esnap_clone": false 00:22:13.522 } 00:22:13.522 } 00:22:13.522 } 00:22:13.522 ]' 00:22:13.522 12:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:13.522 12:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:13.522 12:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:13.522 12:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:13.522 12:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:13.522 12:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:22:13.522 12:06:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:22:13.522 12:06:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 91a8625f-d67e-4bf0-917a-65f7b6f6b200 -c nvc0n1p0 --l2p_dram_limit 20 00:22:13.782 [2024-11-27 12:06:03.731960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.782 [2024-11-27 12:06:03.732014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:13.782 [2024-11-27 12:06:03.732031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:13.782 [2024-11-27 12:06:03.732044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.782 [2024-11-27 12:06:03.732108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.782 [2024-11-27 12:06:03.732122] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:13.782 [2024-11-27 12:06:03.732132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:13.782 [2024-11-27 12:06:03.732144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.782 [2024-11-27 12:06:03.732172] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:13.782 [2024-11-27 12:06:03.733203] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:13.782 [2024-11-27 12:06:03.733232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.782 [2024-11-27 12:06:03.733245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:13.782 [2024-11-27 12:06:03.733257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:22:13.782 [2024-11-27 12:06:03.733269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.782 [2024-11-27 12:06:03.733347] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 653e043b-91c6-489c-b968-3833d489f2c5 00:22:13.782 [2024-11-27 12:06:03.734849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.782 [2024-11-27 12:06:03.734874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:13.782 [2024-11-27 12:06:03.734894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:13.782 [2024-11-27 12:06:03.734904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.782 [2024-11-27 12:06:03.742603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.782 [2024-11-27 12:06:03.742631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:13.782 [2024-11-27 12:06:03.742646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.663 ms 00:22:13.782 [2024-11-27 12:06:03.742659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.782 [2024-11-27 12:06:03.742761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.782 [2024-11-27 12:06:03.742775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:13.782 [2024-11-27 12:06:03.742793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:22:13.782 [2024-11-27 12:06:03.742803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.782 [2024-11-27 12:06:03.742860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.782 [2024-11-27 12:06:03.742872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:13.782 [2024-11-27 12:06:03.742885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:13.782 [2024-11-27 12:06:03.742895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.782 [2024-11-27 12:06:03.742937] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:13.782 [2024-11-27 12:06:03.748345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.782 [2024-11-27 12:06:03.748383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:13.782 [2024-11-27 12:06:03.748411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.430 ms 00:22:13.782 [2024-11-27 12:06:03.748428] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.782 [2024-11-27 12:06:03.748459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.782 [2024-11-27 12:06:03.748473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:13.782 [2024-11-27 12:06:03.748484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:13.782 [2024-11-27 12:06:03.748496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.782 [2024-11-27 12:06:03.748537] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:13.782 [2024-11-27 12:06:03.748666] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:13.782 [2024-11-27 12:06:03.748680] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:13.782 [2024-11-27 12:06:03.748696] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:13.782 [2024-11-27 12:06:03.748709] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:13.782 [2024-11-27 12:06:03.748723] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:13.782 [2024-11-27 12:06:03.748734] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:13.782 [2024-11-27 12:06:03.748763] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:13.782 [2024-11-27 12:06:03.748773] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:13.782 [2024-11-27 12:06:03.748785] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:13.782 [2024-11-27 12:06:03.748798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.782 [2024-11-27 12:06:03.748810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:13.782 [2024-11-27 12:06:03.748821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:22:13.782 [2024-11-27 12:06:03.748833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.782 [2024-11-27 12:06:03.748903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.782 [2024-11-27 12:06:03.748917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:13.782 [2024-11-27 12:06:03.748927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:13.782 [2024-11-27 12:06:03.748942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.782 [2024-11-27 12:06:03.749022] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:13.782 [2024-11-27 12:06:03.749039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:13.782 [2024-11-27 12:06:03.749049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:13.782 [2024-11-27 12:06:03.749062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.782 [2024-11-27 12:06:03.749072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:13.782 [2024-11-27 12:06:03.749084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:13.782 [2024-11-27 12:06:03.749093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:13.782 
[2024-11-27 12:06:03.749107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:13.782 [2024-11-27 12:06:03.749116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:13.782 [2024-11-27 12:06:03.749128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:13.782 [2024-11-27 12:06:03.749137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:13.782 [2024-11-27 12:06:03.749160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:13.782 [2024-11-27 12:06:03.749170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:13.782 [2024-11-27 12:06:03.749182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:13.782 [2024-11-27 12:06:03.749192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:13.782 [2024-11-27 12:06:03.749206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.782 [2024-11-27 12:06:03.749215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:13.782 [2024-11-27 12:06:03.749227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:13.782 [2024-11-27 12:06:03.749235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.782 [2024-11-27 12:06:03.749249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:13.782 [2024-11-27 12:06:03.749259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:13.782 [2024-11-27 12:06:03.749271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.782 [2024-11-27 12:06:03.749280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:13.782 [2024-11-27 12:06:03.749292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:13.783 [2024-11-27 12:06:03.749301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.783 [2024-11-27 12:06:03.749312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:13.783 [2024-11-27 12:06:03.749321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:13.783 [2024-11-27 12:06:03.749333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.783 [2024-11-27 12:06:03.749342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:13.783 [2024-11-27 12:06:03.749355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:13.783 [2024-11-27 12:06:03.749364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:13.783 [2024-11-27 12:06:03.749378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:13.783 [2024-11-27 12:06:03.749399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:13.783 [2024-11-27 12:06:03.749412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:13.783 [2024-11-27 12:06:03.749421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:13.783 [2024-11-27 12:06:03.749433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:13.783 [2024-11-27 12:06:03.749442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:13.783 [2024-11-27 12:06:03.749453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:13.783 [2024-11-27 12:06:03.749463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:22:13.783 [2024-11-27 12:06:03.749475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.783 [2024-11-27 12:06:03.749484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:13.783 [2024-11-27 12:06:03.749496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:13.783 [2024-11-27 12:06:03.749505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.783 [2024-11-27 12:06:03.749517] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:13.783 [2024-11-27 12:06:03.749527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:13.783 [2024-11-27 12:06:03.749540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:13.783 [2024-11-27 12:06:03.749550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:13.783 [2024-11-27 12:06:03.749567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:13.783 [2024-11-27 12:06:03.749577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:13.783 [2024-11-27 12:06:03.749589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:13.783 [2024-11-27 12:06:03.749598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:13.783 [2024-11-27 12:06:03.749610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:13.783 [2024-11-27 12:06:03.749620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:13.783 [2024-11-27 12:06:03.749636] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:13.783 [2024-11-27 12:06:03.749649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:13.783 [2024-11-27 12:06:03.749663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:13.783 [2024-11-27 12:06:03.749674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:13.783 [2024-11-27 12:06:03.749687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:13.783 [2024-11-27 12:06:03.749697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:13.783 [2024-11-27 12:06:03.749710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:13.783 [2024-11-27 12:06:03.749720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:13.783 [2024-11-27 12:06:03.749743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:13.783 [2024-11-27 12:06:03.749753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:13.783 [2024-11-27 12:06:03.749769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:13.783 [2024-11-27 12:06:03.749779] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:13.783 [2024-11-27 12:06:03.749793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:13.783 [2024-11-27 12:06:03.749803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:13.783 [2024-11-27 12:06:03.749816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:13.783 [2024-11-27 12:06:03.749828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:13.783 [2024-11-27 12:06:03.749841] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:13.783 [2024-11-27 12:06:03.749852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:13.783 [2024-11-27 12:06:03.749871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:13.783 [2024-11-27 12:06:03.749882] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:13.783 [2024-11-27 12:06:03.749895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:13.783 [2024-11-27 12:06:03.749905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:13.783 [2024-11-27 12:06:03.749919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.783 [2024-11-27 12:06:03.749930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:13.783 [2024-11-27 12:06:03.749943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:22:13.783 [2024-11-27 12:06:03.749954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.783 [2024-11-27 12:06:03.749994] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
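(A quick consistency check on the layout dump above; plain shell arithmetic, no SPDK involved, and every number is taken verbatim from the trace:

    echo $(( 1310720  * 4096 / 1048576 ))   # 5120   -> nvme0n1: 1310720 blocks x 4096 B = 5120 MiB
    echo $(( 26476544 * 4096 / 1048576 ))   # 103424 -> the lvol: 26476544 blocks x 4096 B = 103424 MiB
    echo $(( 20971520 * 4    / 1048576 ))   # 80     -> L2P: 20971520 entries x 4 B = the 80.00 MiB l2p region

Since the full L2P table is 80 MiB but the bdev was created with --l2p_dram_limit 20, only a small fraction of it can stay DRAM-resident; the cache init below reports the resulting maximum resident size.)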
00:22:13.783 [2024-11-27 12:06:03.750006] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:17.981 [2024-11-27 12:06:07.744826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.981 [2024-11-27 12:06:07.744886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:17.981 [2024-11-27 12:06:07.744924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4001.313 ms 00:22:17.981 [2024-11-27 12:06:07.744935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.981 [2024-11-27 12:06:07.785599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.981 [2024-11-27 12:06:07.785641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:17.981 [2024-11-27 12:06:07.785659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.407 ms 00:22:17.981 [2024-11-27 12:06:07.785670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.981 [2024-11-27 12:06:07.785804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.981 [2024-11-27 12:06:07.785818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:17.981 [2024-11-27 12:06:07.785834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:22:17.981 [2024-11-27 12:06:07.785845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.981 [2024-11-27 12:06:07.846927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.981 [2024-11-27 12:06:07.846968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:17.981 [2024-11-27 12:06:07.846985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.135 ms 00:22:17.981 [2024-11-27 12:06:07.846997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.981 [2024-11-27 12:06:07.847038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.981 [2024-11-27 12:06:07.847050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:17.981 [2024-11-27 12:06:07.847064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:17.981 [2024-11-27 12:06:07.847077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.981 [2024-11-27 12:06:07.847617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.981 [2024-11-27 12:06:07.847639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:17.981 [2024-11-27 12:06:07.847653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:22:17.981 [2024-11-27 12:06:07.847664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.981 [2024-11-27 12:06:07.847900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.981 [2024-11-27 12:06:07.847916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:17.981 [2024-11-27 12:06:07.847934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:22:17.981 [2024-11-27 12:06:07.847944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.981 [2024-11-27 12:06:07.868809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.981 [2024-11-27 12:06:07.868842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:17.981 [2024-11-27 
12:06:07.868859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.873 ms 00:22:17.981 [2024-11-27 12:06:07.868882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.981 [2024-11-27 12:06:07.881331] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:22:17.981 [2024-11-27 12:06:07.887267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.981 [2024-11-27 12:06:07.887298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:17.981 [2024-11-27 12:06:07.887311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.338 ms 00:22:17.981 [2024-11-27 12:06:07.887323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.981 [2024-11-27 12:06:07.978014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.981 [2024-11-27 12:06:07.978074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:17.981 [2024-11-27 12:06:07.978091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.808 ms 00:22:17.981 [2024-11-27 12:06:07.978105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.981 [2024-11-27 12:06:07.978292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.981 [2024-11-27 12:06:07.978312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:17.981 [2024-11-27 12:06:07.978323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:22:17.981 [2024-11-27 12:06:07.978339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.981 [2024-11-27 12:06:08.013420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.981 [2024-11-27 12:06:08.013459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:17.981 [2024-11-27 12:06:08.013473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.075 ms 00:22:17.981 [2024-11-27 12:06:08.013486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.241 [2024-11-27 12:06:08.048346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.241 [2024-11-27 12:06:08.048408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:18.241 [2024-11-27 12:06:08.048422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.877 ms 00:22:18.241 [2024-11-27 12:06:08.048434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.241 [2024-11-27 12:06:08.049170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.241 [2024-11-27 12:06:08.049192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:18.241 [2024-11-27 12:06:08.049205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:22:18.241 [2024-11-27 12:06:08.049218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.241 [2024-11-27 12:06:08.145650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.241 [2024-11-27 12:06:08.145693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:18.241 [2024-11-27 12:06:08.145707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.518 ms 00:22:18.241 [2024-11-27 12:06:08.145720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.241 [2024-11-27 
12:06:08.181548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.241 [2024-11-27 12:06:08.181588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:18.241 [2024-11-27 12:06:08.181604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.788 ms 00:22:18.241 [2024-11-27 12:06:08.181617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.241 [2024-11-27 12:06:08.216059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.241 [2024-11-27 12:06:08.216096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:18.241 [2024-11-27 12:06:08.216109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.458 ms 00:22:18.241 [2024-11-27 12:06:08.216120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.241 [2024-11-27 12:06:08.252566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.241 [2024-11-27 12:06:08.252614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:18.241 [2024-11-27 12:06:08.252643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.466 ms 00:22:18.241 [2024-11-27 12:06:08.252656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.241 [2024-11-27 12:06:08.252698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.242 [2024-11-27 12:06:08.252716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:18.242 [2024-11-27 12:06:08.252727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:18.242 [2024-11-27 12:06:08.252739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.242 [2024-11-27 12:06:08.252836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.242 [2024-11-27 12:06:08.252851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:18.242 [2024-11-27 12:06:08.252861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:18.242 [2024-11-27 12:06:08.252873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.242 [2024-11-27 12:06:08.253881] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4528.817 ms, result 0 00:22:18.242 { 00:22:18.242 "name": "ftl0", 00:22:18.242 "uuid": "653e043b-91c6-489c-b968-3833d489f2c5" 00:22:18.242 } 00:22:18.242 12:06:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:22:18.242 12:06:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:22:18.242 12:06:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:22:18.501 12:06:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:22:18.761 [2024-11-27 12:06:08.569838] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:22:18.761 I/O size of 69632 is greater than zero copy threshold (65536). 00:22:18.761 Zero copy mechanism will not be used. 00:22:18.761 Running I/O for 4 seconds... 
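A note on the bdevperf pass just launched: perform_tests drives the freshly started ftl0 bdev with random writes at queue depth 1 and a 69632-byte (68 KiB) I/O size, which is why the notice above reports that the 65536-byte zero-copy threshold is exceeded and zero copy is disabled. A minimal sketch of the invocation against an already-running bdevperf app, assuming the usual bdevperf flag meanings:

    # Sketch only; path as used elsewhere in this log.
    #   -q 1          queue depth: one outstanding I/O at a time
    #   -w randwrite  random-write workload
    #   -t 4          run time in seconds
    #   -o 69632      I/O size: 68 KiB, above the 64 KiB zero-copy threshold
    BDEVPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    "$BDEVPERF_PY" perform_tests -q 1 -w randwrite -t 4 -o 69632

The 4-second results for this pass follow below.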
00:22:20.635 1396.00 IOPS, 92.70 MiB/s [2024-11-27T12:06:11.625Z] 1453.00 IOPS, 96.49 MiB/s [2024-11-27T12:06:13.005Z] 1492.33 IOPS, 99.10 MiB/s [2024-11-27T12:06:13.005Z] 1516.50 IOPS, 100.71 MiB/s 00:22:22.952 Latency(us) 00:22:22.952 [2024-11-27T12:06:13.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.952 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:22:22.952 ftl0 : 4.00 1516.17 100.68 0.00 0.00 690.94 246.75 4526.98 00:22:22.952 [2024-11-27T12:06:13.005Z] =================================================================================================================== 00:22:22.952 [2024-11-27T12:06:13.005Z] Total : 1516.17 100.68 0.00 0.00 690.94 246.75 4526.98 00:22:22.952 [2024-11-27 12:06:12.573385] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:22:22.952 { 00:22:22.952 "results": [ 00:22:22.952 { 00:22:22.952 "job": "ftl0", 00:22:22.952 "core_mask": "0x1", 00:22:22.952 "workload": "randwrite", 00:22:22.952 "status": "finished", 00:22:22.952 "queue_depth": 1, 00:22:22.952 "io_size": 69632, 00:22:22.952 "runtime": 4.001518, 00:22:22.952 "iops": 1516.1746117348466, 00:22:22.952 "mibps": 100.68347031051715, 00:22:22.952 "io_failed": 0, 00:22:22.952 "io_timeout": 0, 00:22:22.952 "avg_latency_us": 690.9438954433193, 00:22:22.952 "min_latency_us": 246.74698795180723, 00:22:22.952 "max_latency_us": 4526.984738955824 00:22:22.952 } 00:22:22.952 ], 00:22:22.952 "core_count": 1 00:22:22.952 } 00:22:22.952 12:06:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:22:22.952 [2024-11-27 12:06:12.705887] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:22:22.952 Running I/O for 4 seconds... 
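The second pass inverts the profile: 4 KiB random writes at queue depth 128, trading per-I/O latency for much higher IOPS, as the results below show. bdevperf emits its summary as JSON (the "results" blocks in this log); a hedged post-processing sketch, where results.json is a hypothetical capture of that output rather than a file the test itself writes:

    # Hypothetical: assumes the JSON summary was saved to results.json.
    # Field names match the "results" objects printed in this log.
    jq -r '.results[] | "\(.job): \(.iops | floor) IOPS, avg \(.avg_latency_us / 1000) ms"' results.json

Results for the depth-128 randwrite pass follow.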
00:22:24.824 11704.00 IOPS, 45.72 MiB/s [2024-11-27T12:06:15.816Z] 11539.50 IOPS, 45.08 MiB/s [2024-11-27T12:06:16.753Z] 11428.33 IOPS, 44.64 MiB/s [2024-11-27T12:06:16.753Z] 11416.00 IOPS, 44.59 MiB/s 00:22:26.700 Latency(us) 00:22:26.700 [2024-11-27T12:06:16.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.700 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:22:26.700 ftl0 : 4.02 11401.75 44.54 0.00 0.00 11202.34 246.75 21476.86 00:22:26.700 [2024-11-27T12:06:16.753Z] =================================================================================================================== 00:22:26.700 [2024-11-27T12:06:16.753Z] Total : 11401.75 44.54 0.00 0.00 11202.34 0.00 21476.86 00:22:26.700 { 00:22:26.700 "results": [ 00:22:26.700 { 00:22:26.700 "job": "ftl0", 00:22:26.700 "core_mask": "0x1", 00:22:26.700 "workload": "randwrite", 00:22:26.700 "status": "finished", 00:22:26.700 "queue_depth": 128, 00:22:26.700 "io_size": 4096, 00:22:26.700 "runtime": 4.016227, 00:22:26.700 "iops": 11401.745967048177, 00:22:26.700 "mibps": 44.53807018378194, 00:22:26.700 "io_failed": 0, 00:22:26.700 "io_timeout": 0, 00:22:26.700 "avg_latency_us": 11202.340877696672, 00:22:26.700 "min_latency_us": 246.74698795180723, 00:22:26.700 "max_latency_us": 21476.857831325302 00:22:26.700 } 00:22:26.700 ], 00:22:26.700 "core_count": 1 00:22:26.700 } 00:22:26.700 [2024-11-27 12:06:16.725398] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:22:26.700 12:06:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:22:26.959 [2024-11-27 12:06:16.845088] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:22:26.959 Running I/O for 4 seconds... 
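The final pass uses -w verify, which checks data integrity by reading back what it wrote across the first 20 MiB of the bdev (the verify_range of 20971520 bytes in the results below). The reported figures can be sanity-checked against Little's law, average latency ~= queue depth / IOPS; a quick check with numbers from the two depth-128 passes:

    # Little's law: avg latency ~= queue_depth / IOPS (values from this log).
    awk 'BEGIN {
        printf "randwrite: %.2f ms (reported ~11.20 ms)\n", 128 / 11401.75 * 1000
        printf "verify:    %.2f ms (reported ~17.08 ms)\n", 128 / 7472.15 * 1000
    }'

Both predictions land within ~0.5% of the measured averages, confirming the queue stayed saturated for the full run.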
00:22:28.833 6593.00 IOPS, 25.75 MiB/s [2024-11-27T12:06:19.864Z] 7341.50 IOPS, 28.68 MiB/s [2024-11-27T12:06:20.859Z] 7642.00 IOPS, 29.85 MiB/s [2024-11-27T12:06:20.859Z] 7458.75 IOPS, 29.14 MiB/s 00:22:30.806 Latency(us) 00:22:30.806 [2024-11-27T12:06:20.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.806 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:30.806 Verification LBA range: start 0x0 length 0x1400000 00:22:30.806 ftl0 : 4.01 7472.15 29.19 0.00 0.00 17080.16 266.49 21582.14 00:22:30.806 [2024-11-27T12:06:20.859Z] =================================================================================================================== 00:22:30.806 [2024-11-27T12:06:20.859Z] Total : 7472.15 29.19 0.00 0.00 17080.16 0.00 21582.14 00:22:31.066 [2024-11-27 12:06:20.867607] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:22:31.066 { 00:22:31.066 "results": [ 00:22:31.066 { 00:22:31.066 "job": "ftl0", 00:22:31.066 "core_mask": "0x1", 00:22:31.066 "workload": "verify", 00:22:31.066 "status": "finished", 00:22:31.066 "verify_range": { 00:22:31.066 "start": 0, 00:22:31.066 "length": 20971520 00:22:31.066 }, 00:22:31.066 "queue_depth": 128, 00:22:31.066 "io_size": 4096, 00:22:31.066 "runtime": 4.009959, 00:22:31.066 "iops": 7472.146223938948, 00:22:31.066 "mibps": 29.188071187261517, 00:22:31.066 "io_failed": 0, 00:22:31.066 "io_timeout": 0, 00:22:31.066 "avg_latency_us": 17080.160437551694, 00:22:31.066 "min_latency_us": 266.4867469879518, 00:22:31.066 "max_latency_us": 21582.136546184738 00:22:31.066 } 00:22:31.066 ], 00:22:31.066 "core_count": 1 00:22:31.066 } 00:22:31.066 12:06:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:22:31.066 [2024-11-27 12:06:21.078551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.066 [2024-11-27 12:06:21.078601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:31.066 [2024-11-27 12:06:21.078617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:31.066 [2024-11-27 12:06:21.078631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.066 [2024-11-27 12:06:21.078654] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:31.066 [2024-11-27 12:06:21.082991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.066 [2024-11-27 12:06:21.083028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:31.066 [2024-11-27 12:06:21.083044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.315 ms 00:22:31.066 [2024-11-27 12:06:21.083054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.066 [2024-11-27 12:06:21.084948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.066 [2024-11-27 12:06:21.084990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:31.066 [2024-11-27 12:06:21.085009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.859 ms 00:22:31.066 [2024-11-27 12:06:21.085020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.325 [2024-11-27 12:06:21.289871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.325 [2024-11-27 12:06:21.289920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:22:31.325 [2024-11-27 12:06:21.289944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 205.158 ms 00:22:31.325 [2024-11-27 12:06:21.289956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.325 [2024-11-27 12:06:21.294924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.325 [2024-11-27 12:06:21.294956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:31.325 [2024-11-27 12:06:21.294987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.922 ms 00:22:31.325 [2024-11-27 12:06:21.295001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.325 [2024-11-27 12:06:21.329796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.325 [2024-11-27 12:06:21.329842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:31.325 [2024-11-27 12:06:21.329873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.791 ms 00:22:31.325 [2024-11-27 12:06:21.329883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.325 [2024-11-27 12:06:21.351275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.325 [2024-11-27 12:06:21.351314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:31.325 [2024-11-27 12:06:21.351329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.382 ms 00:22:31.325 [2024-11-27 12:06:21.351339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.325 [2024-11-27 12:06:21.351530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.325 [2024-11-27 12:06:21.351546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:31.325 [2024-11-27 12:06:21.351562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:22:31.325 [2024-11-27 12:06:21.351572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.586 [2024-11-27 12:06:21.386769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.586 [2024-11-27 12:06:21.386803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:31.586 [2024-11-27 12:06:21.386835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.234 ms 00:22:31.587 [2024-11-27 12:06:21.386844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.587 [2024-11-27 12:06:21.421099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.587 [2024-11-27 12:06:21.421136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:31.587 [2024-11-27 12:06:21.421167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.251 ms 00:22:31.587 [2024-11-27 12:06:21.421177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.587 [2024-11-27 12:06:21.454851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.587 [2024-11-27 12:06:21.454884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:31.587 [2024-11-27 12:06:21.454916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.686 ms 00:22:31.587 [2024-11-27 12:06:21.454926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.587 [2024-11-27 12:06:21.488458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.587 [2024-11-27 
12:06:21.488490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:31.587 [2024-11-27 12:06:21.488523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.479 ms 00:22:31.587 [2024-11-27 12:06:21.488532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.587 [2024-11-27 12:06:21.488572] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:31.587 [2024-11-27 12:06:21.488588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.488995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:31.587 [2024-11-27 12:06:21.489319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489492] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489801] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:31.588 [2024-11-27 12:06:21.489856] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:31.588 [2024-11-27 12:06:21.489868] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 653e043b-91c6-489c-b968-3833d489f2c5 00:22:31.588 [2024-11-27 12:06:21.489882] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:31.588 [2024-11-27 12:06:21.489894] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:31.588 [2024-11-27 12:06:21.489904] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:31.588 [2024-11-27 12:06:21.489917] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:31.588 [2024-11-27 12:06:21.489927] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:31.588 [2024-11-27 12:06:21.489939] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:31.588 [2024-11-27 12:06:21.489949] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:31.588 [2024-11-27 12:06:21.489963] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:31.588 [2024-11-27 12:06:21.489972] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:31.588 [2024-11-27 12:06:21.489984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.588 [2024-11-27 12:06:21.489994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:31.588 [2024-11-27 12:06:21.490007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.416 ms 00:22:31.588 [2024-11-27 12:06:21.490017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.588 [2024-11-27 12:06:21.509499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.588 [2024-11-27 12:06:21.509530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:31.588 [2024-11-27 12:06:21.509561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.459 ms 00:22:31.588 [2024-11-27 12:06:21.509572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.588 [2024-11-27 12:06:21.510180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.588 [2024-11-27 12:06:21.510202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:31.588 [2024-11-27 12:06:21.510216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:22:31.588 [2024-11-27 12:06:21.510236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.588 [2024-11-27 12:06:21.562419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.588 [2024-11-27 12:06:21.562450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:31.588 [2024-11-27 12:06:21.562469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.588 [2024-11-27 12:06:21.562480] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:31.588 [2024-11-27 12:06:21.562543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.588 [2024-11-27 12:06:21.562555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:31.588 [2024-11-27 12:06:21.562567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.588 [2024-11-27 12:06:21.562577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.588 [2024-11-27 12:06:21.562669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.588 [2024-11-27 12:06:21.562683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:31.588 [2024-11-27 12:06:21.562696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.588 [2024-11-27 12:06:21.562705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.588 [2024-11-27 12:06:21.562724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.588 [2024-11-27 12:06:21.562734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:31.588 [2024-11-27 12:06:21.562747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.588 [2024-11-27 12:06:21.562757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.849 [2024-11-27 12:06:21.681817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.849 [2024-11-27 12:06:21.681869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:31.849 [2024-11-27 12:06:21.681905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.849 [2024-11-27 12:06:21.681916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.849 [2024-11-27 12:06:21.777750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.849 [2024-11-27 12:06:21.777802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:31.849 [2024-11-27 12:06:21.777817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.849 [2024-11-27 12:06:21.777827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.849 [2024-11-27 12:06:21.777958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.849 [2024-11-27 12:06:21.777972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:31.849 [2024-11-27 12:06:21.777985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.849 [2024-11-27 12:06:21.777995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.849 [2024-11-27 12:06:21.778044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.849 [2024-11-27 12:06:21.778056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:31.849 [2024-11-27 12:06:21.778075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.849 [2024-11-27 12:06:21.778085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.849 [2024-11-27 12:06:21.778198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.849 [2024-11-27 12:06:21.778230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:31.849 [2024-11-27 12:06:21.778246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:22:31.849 [2024-11-27 12:06:21.778256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.849 [2024-11-27 12:06:21.778295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.849 [2024-11-27 12:06:21.778308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:31.849 [2024-11-27 12:06:21.778320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.849 [2024-11-27 12:06:21.778330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.849 [2024-11-27 12:06:21.778371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.849 [2024-11-27 12:06:21.778401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:31.849 [2024-11-27 12:06:21.778414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.849 [2024-11-27 12:06:21.778435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.849 [2024-11-27 12:06:21.778480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.849 [2024-11-27 12:06:21.778492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:31.849 [2024-11-27 12:06:21.778505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.849 [2024-11-27 12:06:21.778515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.849 [2024-11-27 12:06:21.778644] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 701.189 ms, result 0 00:22:31.849 true 00:22:31.850 12:06:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77753 00:22:31.850 12:06:21 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77753 ']' 00:22:31.850 12:06:21 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77753 00:22:31.850 12:06:21 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:22:31.850 12:06:21 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.850 12:06:21 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77753 00:22:31.850 killing process with pid 77753 00:22:31.850 Received shutdown signal, test time was about 4.000000 seconds 00:22:31.850 00:22:31.850 Latency(us) 00:22:31.850 [2024-11-27T12:06:21.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.850 [2024-11-27T12:06:21.903Z] =================================================================================================================== 00:22:31.850 [2024-11-27T12:06:21.903Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:31.850 12:06:21 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:31.850 12:06:21 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:31.850 12:06:21 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77753' 00:22:31.850 12:06:21 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77753 00:22:31.850 12:06:21 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77753 00:22:33.229 Remove shared memory files 00:22:33.229 12:06:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:33.229 12:06:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:22:33.229 12:06:23 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:33.229 12:06:23 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:22:33.229 12:06:23 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:22:33.229 12:06:23 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:22:33.229 12:06:23 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:33.229 12:06:23 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:22:33.229 00:22:33.229 real 0m23.542s 00:22:33.229 user 0m25.994s 00:22:33.229 sys 0m1.255s 00:22:33.229 12:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:33.229 ************************************ 00:22:33.229 END TEST ftl_bdevperf 00:22:33.229 ************************************ 00:22:33.229 12:06:23 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:33.229 12:06:23 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:22:33.229 12:06:23 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:33.229 12:06:23 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:33.229 12:06:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:33.229 ************************************ 00:22:33.229 START TEST ftl_trim 00:22:33.229 ************************************ 00:22:33.229 12:06:23 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:22:33.490 * Looking for test storage... 00:22:33.490 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:33.490 12:06:23 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:33.490 12:06:23 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:22:33.490 12:06:23 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:33.490 12:06:23 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:33.490 12:06:23 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:22:33.490 12:06:23 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:33.490 12:06:23 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:33.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.490 --rc genhtml_branch_coverage=1 00:22:33.490 --rc genhtml_function_coverage=1 00:22:33.490 --rc genhtml_legend=1 00:22:33.490 --rc geninfo_all_blocks=1 00:22:33.490 --rc geninfo_unexecuted_blocks=1 00:22:33.490 00:22:33.490 ' 00:22:33.490 12:06:23 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:33.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.490 --rc genhtml_branch_coverage=1 00:22:33.490 --rc genhtml_function_coverage=1 00:22:33.490 --rc genhtml_legend=1 00:22:33.490 --rc geninfo_all_blocks=1 00:22:33.490 --rc geninfo_unexecuted_blocks=1 00:22:33.490 00:22:33.490 ' 00:22:33.490 12:06:23 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:33.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.490 --rc genhtml_branch_coverage=1 00:22:33.490 --rc genhtml_function_coverage=1 00:22:33.490 --rc genhtml_legend=1 00:22:33.490 --rc geninfo_all_blocks=1 00:22:33.490 --rc geninfo_unexecuted_blocks=1 00:22:33.490 00:22:33.490 ' 00:22:33.490 12:06:23 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:33.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.490 --rc genhtml_branch_coverage=1 00:22:33.490 --rc genhtml_function_coverage=1 00:22:33.490 --rc genhtml_legend=1 00:22:33.490 --rc geninfo_all_blocks=1 00:22:33.490 --rc geninfo_unexecuted_blocks=1 00:22:33.490 00:22:33.490 ' 00:22:33.490 12:06:23 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:33.490 12:06:23 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:22:33.490 12:06:23 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:33.490 12:06:23 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:33.490 12:06:23 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
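The xtrace above walks through the coverage prologue from scripts/common.sh: lcov --version is parsed with awk '{print $NF}', and lt 1.15 2 splits both version strings on ., -, and : and compares them component by component, here selecting the legacy lcov_* spellings of the --rc options for a pre-2.x lcov. A simplified sketch of that comparison (numeric components only), not the exact scripts/common.sh implementation:

    # Simplified sketch of the 'lt' version test traced above: split the
    # version strings on . - : and compare numerically, left to right.
    lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not strictly less-than
    }
    lt 1.15 2 && echo "lcov < 2: use the legacy lcov_* --rc option names"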
00:22:33.490 12:06:23 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:33.490 12:06:23 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:33.490 12:06:23 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:33.490 12:06:23 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:33.490 12:06:23 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:33.490 12:06:23 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:33.490 12:06:23 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:33.490 12:06:23 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:33.490 12:06:23 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:33.491 12:06:23 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78121 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78121 00:22:33.491 12:06:23 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:22:33.491 12:06:23 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78121 ']' 00:22:33.491 12:06:23 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.491 12:06:23 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.491 12:06:23 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.491 12:06:23 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.491 12:06:23 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:33.751 [2024-11-27 12:06:23.577462] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:22:33.751 [2024-11-27 12:06:23.577580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78121 ] 00:22:33.751 [2024-11-27 12:06:23.759483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:34.010 [2024-11-27 12:06:23.870095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.010 [2024-11-27 12:06:23.870232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.010 [2024-11-27 12:06:23.870276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.948 12:06:24 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.948 12:06:24 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:22:34.948 12:06:24 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:34.948 12:06:24 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:22:34.949 12:06:24 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:34.949 12:06:24 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:22:34.949 12:06:24 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:22:34.949 12:06:24 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:35.208 12:06:25 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:35.208 12:06:25 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:22:35.208 12:06:25 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:35.208 12:06:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:35.208 12:06:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:35.208 12:06:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:35.208 12:06:25 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:35.208 12:06:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:35.208 12:06:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:35.208 { 00:22:35.208 "name": "nvme0n1", 00:22:35.208 "aliases": [ 
00:22:35.208 "3181e065-ef5b-408f-9b68-8330c3a2a9aa" 00:22:35.208 ], 00:22:35.208 "product_name": "NVMe disk", 00:22:35.208 "block_size": 4096, 00:22:35.208 "num_blocks": 1310720, 00:22:35.208 "uuid": "3181e065-ef5b-408f-9b68-8330c3a2a9aa", 00:22:35.208 "numa_id": -1, 00:22:35.208 "assigned_rate_limits": { 00:22:35.208 "rw_ios_per_sec": 0, 00:22:35.208 "rw_mbytes_per_sec": 0, 00:22:35.208 "r_mbytes_per_sec": 0, 00:22:35.208 "w_mbytes_per_sec": 0 00:22:35.208 }, 00:22:35.208 "claimed": true, 00:22:35.208 "claim_type": "read_many_write_one", 00:22:35.208 "zoned": false, 00:22:35.208 "supported_io_types": { 00:22:35.208 "read": true, 00:22:35.208 "write": true, 00:22:35.208 "unmap": true, 00:22:35.208 "flush": true, 00:22:35.208 "reset": true, 00:22:35.208 "nvme_admin": true, 00:22:35.208 "nvme_io": true, 00:22:35.208 "nvme_io_md": false, 00:22:35.208 "write_zeroes": true, 00:22:35.208 "zcopy": false, 00:22:35.208 "get_zone_info": false, 00:22:35.208 "zone_management": false, 00:22:35.208 "zone_append": false, 00:22:35.208 "compare": true, 00:22:35.208 "compare_and_write": false, 00:22:35.208 "abort": true, 00:22:35.208 "seek_hole": false, 00:22:35.208 "seek_data": false, 00:22:35.208 "copy": true, 00:22:35.208 "nvme_iov_md": false 00:22:35.208 }, 00:22:35.208 "driver_specific": { 00:22:35.208 "nvme": [ 00:22:35.208 { 00:22:35.208 "pci_address": "0000:00:11.0", 00:22:35.208 "trid": { 00:22:35.208 "trtype": "PCIe", 00:22:35.208 "traddr": "0000:00:11.0" 00:22:35.208 }, 00:22:35.208 "ctrlr_data": { 00:22:35.208 "cntlid": 0, 00:22:35.208 "vendor_id": "0x1b36", 00:22:35.208 "model_number": "QEMU NVMe Ctrl", 00:22:35.208 "serial_number": "12341", 00:22:35.208 "firmware_revision": "8.0.0", 00:22:35.208 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:35.208 "oacs": { 00:22:35.208 "security": 0, 00:22:35.208 "format": 1, 00:22:35.208 "firmware": 0, 00:22:35.208 "ns_manage": 1 00:22:35.208 }, 00:22:35.208 "multi_ctrlr": false, 00:22:35.208 "ana_reporting": false 00:22:35.208 }, 00:22:35.209 "vs": { 00:22:35.209 "nvme_version": "1.4" 00:22:35.209 }, 00:22:35.209 "ns_data": { 00:22:35.209 "id": 1, 00:22:35.209 "can_share": false 00:22:35.209 } 00:22:35.209 } 00:22:35.209 ], 00:22:35.209 "mp_policy": "active_passive" 00:22:35.209 } 00:22:35.209 } 00:22:35.209 ]' 00:22:35.209 12:06:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:35.468 12:06:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:22:35.468 12:06:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:35.468 12:06:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:35.468 12:06:25 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:35.468 12:06:25 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:22:35.468 12:06:25 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:22:35.468 12:06:25 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:35.468 12:06:25 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:22:35.468 12:06:25 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:35.468 12:06:25 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:35.727 12:06:25 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=51394a6a-8ad0-465f-8819-f281cd6348e0 00:22:35.727 12:06:25 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:22:35.727 12:06:25 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 51394a6a-8ad0-465f-8819-f281cd6348e0 00:22:35.727 12:06:25 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:35.986 12:06:25 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=d910be63-3b61-4c9f-bad5-993166c52464 00:22:35.986 12:06:25 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d910be63-3b61-4c9f-bad5-993166c52464 00:22:36.246 12:06:26 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=7d7feaf4-3d20-41c3-b19a-a3120c164d47 00:22:36.246 12:06:26 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7d7feaf4-3d20-41c3-b19a-a3120c164d47 00:22:36.246 12:06:26 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:22:36.246 12:06:26 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:36.246 12:06:26 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=7d7feaf4-3d20-41c3-b19a-a3120c164d47 00:22:36.246 12:06:26 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:22:36.246 12:06:26 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 7d7feaf4-3d20-41c3-b19a-a3120c164d47 00:22:36.246 12:06:26 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=7d7feaf4-3d20-41c3-b19a-a3120c164d47 00:22:36.247 12:06:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:36.247 12:06:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:36.247 12:06:26 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:36.247 12:06:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7d7feaf4-3d20-41c3-b19a-a3120c164d47 00:22:36.506 12:06:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:36.506 { 00:22:36.506 "name": "7d7feaf4-3d20-41c3-b19a-a3120c164d47", 00:22:36.506 "aliases": [ 00:22:36.506 "lvs/nvme0n1p0" 00:22:36.506 ], 00:22:36.506 "product_name": "Logical Volume", 00:22:36.506 "block_size": 4096, 00:22:36.506 "num_blocks": 26476544, 00:22:36.506 "uuid": "7d7feaf4-3d20-41c3-b19a-a3120c164d47", 00:22:36.506 "assigned_rate_limits": { 00:22:36.506 "rw_ios_per_sec": 0, 00:22:36.506 "rw_mbytes_per_sec": 0, 00:22:36.506 "r_mbytes_per_sec": 0, 00:22:36.506 "w_mbytes_per_sec": 0 00:22:36.506 }, 00:22:36.506 "claimed": false, 00:22:36.506 "zoned": false, 00:22:36.506 "supported_io_types": { 00:22:36.506 "read": true, 00:22:36.506 "write": true, 00:22:36.506 "unmap": true, 00:22:36.506 "flush": false, 00:22:36.506 "reset": true, 00:22:36.506 "nvme_admin": false, 00:22:36.506 "nvme_io": false, 00:22:36.506 "nvme_io_md": false, 00:22:36.506 "write_zeroes": true, 00:22:36.506 "zcopy": false, 00:22:36.506 "get_zone_info": false, 00:22:36.506 "zone_management": false, 00:22:36.506 "zone_append": false, 00:22:36.506 "compare": false, 00:22:36.506 "compare_and_write": false, 00:22:36.506 "abort": false, 00:22:36.506 "seek_hole": true, 00:22:36.506 "seek_data": true, 00:22:36.506 "copy": false, 00:22:36.506 "nvme_iov_md": false 00:22:36.506 }, 00:22:36.506 "driver_specific": { 00:22:36.506 "lvol": { 00:22:36.506 "lvol_store_uuid": "d910be63-3b61-4c9f-bad5-993166c52464", 00:22:36.506 "base_bdev": "nvme0n1", 00:22:36.506 "thin_provision": true, 00:22:36.506 "num_allocated_clusters": 0, 00:22:36.506 "snapshot": false, 00:22:36.506 "clone": false, 00:22:36.506 "esnap_clone": false 00:22:36.506 } 00:22:36.506 } 00:22:36.506 } 00:22:36.506 ]' 00:22:36.506 12:06:26 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:36.506 12:06:26 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:22:36.506 12:06:26 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:36.506 12:06:26 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:36.507 12:06:26 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:36.507 12:06:26 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:22:36.507 12:06:26 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:22:36.507 12:06:26 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:22:36.507 12:06:26 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:36.766 12:06:26 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:36.766 12:06:26 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:36.766 12:06:26 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 7d7feaf4-3d20-41c3-b19a-a3120c164d47 00:22:36.766 12:06:26 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=7d7feaf4-3d20-41c3-b19a-a3120c164d47 00:22:36.766 12:06:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:36.766 12:06:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:36.766 12:06:26 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:36.766 12:06:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7d7feaf4-3d20-41c3-b19a-a3120c164d47 00:22:37.025 12:06:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:37.025 { 00:22:37.026 "name": "7d7feaf4-3d20-41c3-b19a-a3120c164d47", 00:22:37.026 "aliases": [ 00:22:37.026 "lvs/nvme0n1p0" 00:22:37.026 ], 00:22:37.026 "product_name": "Logical Volume", 00:22:37.026 "block_size": 4096, 00:22:37.026 "num_blocks": 26476544, 00:22:37.026 "uuid": "7d7feaf4-3d20-41c3-b19a-a3120c164d47", 00:22:37.026 "assigned_rate_limits": { 00:22:37.026 "rw_ios_per_sec": 0, 00:22:37.026 "rw_mbytes_per_sec": 0, 00:22:37.026 "r_mbytes_per_sec": 0, 00:22:37.026 "w_mbytes_per_sec": 0 00:22:37.026 }, 00:22:37.026 "claimed": false, 00:22:37.026 "zoned": false, 00:22:37.026 "supported_io_types": { 00:22:37.026 "read": true, 00:22:37.026 "write": true, 00:22:37.026 "unmap": true, 00:22:37.026 "flush": false, 00:22:37.026 "reset": true, 00:22:37.026 "nvme_admin": false, 00:22:37.026 "nvme_io": false, 00:22:37.026 "nvme_io_md": false, 00:22:37.026 "write_zeroes": true, 00:22:37.026 "zcopy": false, 00:22:37.026 "get_zone_info": false, 00:22:37.026 "zone_management": false, 00:22:37.026 "zone_append": false, 00:22:37.026 "compare": false, 00:22:37.026 "compare_and_write": false, 00:22:37.026 "abort": false, 00:22:37.026 "seek_hole": true, 00:22:37.026 "seek_data": true, 00:22:37.026 "copy": false, 00:22:37.026 "nvme_iov_md": false 00:22:37.026 }, 00:22:37.026 "driver_specific": { 00:22:37.026 "lvol": { 00:22:37.026 "lvol_store_uuid": "d910be63-3b61-4c9f-bad5-993166c52464", 00:22:37.026 "base_bdev": "nvme0n1", 00:22:37.026 "thin_provision": true, 00:22:37.026 "num_allocated_clusters": 0, 00:22:37.026 "snapshot": false, 00:22:37.026 "clone": false, 00:22:37.026 "esnap_clone": false 00:22:37.026 } 00:22:37.026 } 00:22:37.026 } 00:22:37.026 ]' 00:22:37.026 12:06:26 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:37.026 12:06:26 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:22:37.026 12:06:26 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:37.026 12:06:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:37.026 12:06:27 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:37.026 12:06:27 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:22:37.026 12:06:27 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:22:37.026 12:06:27 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:37.285 12:06:27 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:22:37.285 12:06:27 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:22:37.285 12:06:27 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 7d7feaf4-3d20-41c3-b19a-a3120c164d47 00:22:37.285 12:06:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=7d7feaf4-3d20-41c3-b19a-a3120c164d47 00:22:37.285 12:06:27 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:37.285 12:06:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:37.285 12:06:27 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:37.285 12:06:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7d7feaf4-3d20-41c3-b19a-a3120c164d47 00:22:37.545 12:06:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:37.545 { 00:22:37.545 "name": "7d7feaf4-3d20-41c3-b19a-a3120c164d47", 00:22:37.545 "aliases": [ 00:22:37.545 "lvs/nvme0n1p0" 00:22:37.545 ], 00:22:37.545 "product_name": "Logical Volume", 00:22:37.545 "block_size": 4096, 00:22:37.545 "num_blocks": 26476544, 00:22:37.545 "uuid": "7d7feaf4-3d20-41c3-b19a-a3120c164d47", 00:22:37.545 "assigned_rate_limits": { 00:22:37.545 "rw_ios_per_sec": 0, 00:22:37.545 "rw_mbytes_per_sec": 0, 00:22:37.545 "r_mbytes_per_sec": 0, 00:22:37.545 "w_mbytes_per_sec": 0 00:22:37.545 }, 00:22:37.545 "claimed": false, 00:22:37.545 "zoned": false, 00:22:37.545 "supported_io_types": { 00:22:37.545 "read": true, 00:22:37.545 "write": true, 00:22:37.545 "unmap": true, 00:22:37.545 "flush": false, 00:22:37.545 "reset": true, 00:22:37.545 "nvme_admin": false, 00:22:37.545 "nvme_io": false, 00:22:37.545 "nvme_io_md": false, 00:22:37.545 "write_zeroes": true, 00:22:37.545 "zcopy": false, 00:22:37.545 "get_zone_info": false, 00:22:37.545 "zone_management": false, 00:22:37.545 "zone_append": false, 00:22:37.545 "compare": false, 00:22:37.545 "compare_and_write": false, 00:22:37.545 "abort": false, 00:22:37.545 "seek_hole": true, 00:22:37.545 "seek_data": true, 00:22:37.545 "copy": false, 00:22:37.545 "nvme_iov_md": false 00:22:37.545 }, 00:22:37.545 "driver_specific": { 00:22:37.545 "lvol": { 00:22:37.545 "lvol_store_uuid": "d910be63-3b61-4c9f-bad5-993166c52464", 00:22:37.545 "base_bdev": "nvme0n1", 00:22:37.545 "thin_provision": true, 00:22:37.545 "num_allocated_clusters": 0, 00:22:37.545 "snapshot": false, 00:22:37.545 "clone": false, 00:22:37.545 "esnap_clone": false 00:22:37.545 } 00:22:37.545 } 00:22:37.545 } 00:22:37.545 ]' 00:22:37.545 12:06:27 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:37.545 12:06:27 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:22:37.545 12:06:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:37.545 12:06:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:22:37.545 12:06:27 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:37.545 12:06:27 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:22:37.546 12:06:27 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:22:37.546 12:06:27 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7d7feaf4-3d20-41c3-b19a-a3120c164d47 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:22:37.806 [2024-11-27 12:06:27.674481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.806 [2024-11-27 12:06:27.674685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:37.806 [2024-11-27 12:06:27.674717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:37.807 [2024-11-27 12:06:27.674729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.807 [2024-11-27 12:06:27.678168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.807 [2024-11-27 12:06:27.678308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:37.807 [2024-11-27 12:06:27.678335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.376 ms 00:22:37.807 [2024-11-27 12:06:27.678351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.807 [2024-11-27 12:06:27.678665] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:37.807 [2024-11-27 12:06:27.679622] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:37.807 [2024-11-27 12:06:27.679660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.807 [2024-11-27 12:06:27.679673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:37.807 [2024-11-27 12:06:27.679686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.008 ms 00:22:37.807 [2024-11-27 12:06:27.679696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.807 [2024-11-27 12:06:27.679798] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ac0f0c90-8a1c-488f-b0e9-47cb15d830e6 00:22:37.807 [2024-11-27 12:06:27.681257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.807 [2024-11-27 12:06:27.681294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:37.807 [2024-11-27 12:06:27.681307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:37.807 [2024-11-27 12:06:27.681319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.807 [2024-11-27 12:06:27.689086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.807 [2024-11-27 12:06:27.689252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:37.807 [2024-11-27 12:06:27.689271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.526 ms 00:22:37.807 [2024-11-27 12:06:27.689287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.807 [2024-11-27 12:06:27.689495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.807 [2024-11-27 12:06:27.689515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:37.807 [2024-11-27 12:06:27.689527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.081 ms 00:22:37.807 [2024-11-27 12:06:27.689543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.807 [2024-11-27 12:06:27.689610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.807 [2024-11-27 12:06:27.689624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:37.807 [2024-11-27 12:06:27.689635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:37.807 [2024-11-27 12:06:27.689651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.807 [2024-11-27 12:06:27.689710] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:37.807 [2024-11-27 12:06:27.694712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.807 [2024-11-27 12:06:27.694743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:37.807 [2024-11-27 12:06:27.694760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.013 ms 00:22:37.807 [2024-11-27 12:06:27.694787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.807 [2024-11-27 12:06:27.694901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.807 [2024-11-27 12:06:27.694931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:37.807 [2024-11-27 12:06:27.694945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:37.807 [2024-11-27 12:06:27.694955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.807 [2024-11-27 12:06:27.695019] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:37.807 [2024-11-27 12:06:27.695142] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:37.807 [2024-11-27 12:06:27.695161] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:37.807 [2024-11-27 12:06:27.695174] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:37.807 [2024-11-27 12:06:27.695190] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:37.807 [2024-11-27 12:06:27.695202] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:37.807 [2024-11-27 12:06:27.695216] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:37.807 [2024-11-27 12:06:27.695227] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:37.807 [2024-11-27 12:06:27.695242] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:37.807 [2024-11-27 12:06:27.695251] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:37.807 [2024-11-27 12:06:27.695264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.807 [2024-11-27 12:06:27.695274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:37.807 [2024-11-27 12:06:27.695287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:22:37.807 [2024-11-27 12:06:27.695297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.807 [2024-11-27 12:06:27.695446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.807 
[2024-11-27 12:06:27.695459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:37.807 [2024-11-27 12:06:27.695472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:22:37.807 [2024-11-27 12:06:27.695482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.807 [2024-11-27 12:06:27.695669] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:37.807 [2024-11-27 12:06:27.695681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:37.807 [2024-11-27 12:06:27.695694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:37.807 [2024-11-27 12:06:27.695705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.807 [2024-11-27 12:06:27.695718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:37.807 [2024-11-27 12:06:27.695727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:37.807 [2024-11-27 12:06:27.695739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:37.807 [2024-11-27 12:06:27.695748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:37.807 [2024-11-27 12:06:27.695760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:37.807 [2024-11-27 12:06:27.695769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:37.807 [2024-11-27 12:06:27.695782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:37.807 [2024-11-27 12:06:27.695791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:37.807 [2024-11-27 12:06:27.695803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:37.807 [2024-11-27 12:06:27.695812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:37.807 [2024-11-27 12:06:27.695824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:37.807 [2024-11-27 12:06:27.695833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.807 [2024-11-27 12:06:27.695849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:37.807 [2024-11-27 12:06:27.695858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:37.807 [2024-11-27 12:06:27.695870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.807 [2024-11-27 12:06:27.695879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:37.807 [2024-11-27 12:06:27.695893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:37.807 [2024-11-27 12:06:27.695902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:37.807 [2024-11-27 12:06:27.695914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:37.807 [2024-11-27 12:06:27.695923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:37.807 [2024-11-27 12:06:27.695935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:37.807 [2024-11-27 12:06:27.695944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:37.807 [2024-11-27 12:06:27.695955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:37.807 [2024-11-27 12:06:27.695964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:37.807 [2024-11-27 12:06:27.695976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:22:37.807 [2024-11-27 12:06:27.695985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:37.807 [2024-11-27 12:06:27.695997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:37.807 [2024-11-27 12:06:27.696006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:37.807 [2024-11-27 12:06:27.696020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:37.807 [2024-11-27 12:06:27.696029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:37.807 [2024-11-27 12:06:27.696041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:37.807 [2024-11-27 12:06:27.696051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:37.807 [2024-11-27 12:06:27.696062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:37.807 [2024-11-27 12:06:27.696072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:37.807 [2024-11-27 12:06:27.696084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:37.807 [2024-11-27 12:06:27.696093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.807 [2024-11-27 12:06:27.696105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:37.807 [2024-11-27 12:06:27.696113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:37.807 [2024-11-27 12:06:27.696125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.807 [2024-11-27 12:06:27.696133] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:37.807 [2024-11-27 12:06:27.696146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:37.807 [2024-11-27 12:06:27.696156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:37.807 [2024-11-27 12:06:27.696168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.807 [2024-11-27 12:06:27.696178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:37.808 [2024-11-27 12:06:27.696194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:37.808 [2024-11-27 12:06:27.696205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:37.808 [2024-11-27 12:06:27.696217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:37.808 [2024-11-27 12:06:27.696227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:37.808 [2024-11-27 12:06:27.696238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:37.808 [2024-11-27 12:06:27.696252] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:37.808 [2024-11-27 12:06:27.696268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:37.808 [2024-11-27 12:06:27.696286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:37.808 [2024-11-27 12:06:27.696299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:37.808 [2024-11-27 12:06:27.696310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:22:37.808 [2024-11-27 12:06:27.696322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:37.808 [2024-11-27 12:06:27.696333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:37.808 [2024-11-27 12:06:27.696346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:37.808 [2024-11-27 12:06:27.696366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:37.808 [2024-11-27 12:06:27.696380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:37.808 [2024-11-27 12:06:27.696390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:37.808 [2024-11-27 12:06:27.696405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:37.808 [2024-11-27 12:06:27.696415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:37.808 [2024-11-27 12:06:27.696428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:37.808 [2024-11-27 12:06:27.696438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:37.808 [2024-11-27 12:06:27.696452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:37.808 [2024-11-27 12:06:27.696462] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:37.808 [2024-11-27 12:06:27.696476] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:37.808 [2024-11-27 12:06:27.696487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:37.808 [2024-11-27 12:06:27.696500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:37.808 [2024-11-27 12:06:27.696510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:37.808 [2024-11-27 12:06:27.696523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:37.808 [2024-11-27 12:06:27.696534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.808 [2024-11-27 12:06:27.696547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:37.808 [2024-11-27 12:06:27.696557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:22:37.808 [2024-11-27 12:06:27.696570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.808 [2024-11-27 12:06:27.696738] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:22:37.808 [2024-11-27 12:06:27.696756] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:41.101 [2024-11-27 12:06:31.147264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.101 [2024-11-27 12:06:31.147335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:41.101 [2024-11-27 12:06:31.147369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3456.125 ms 00:22:41.101 [2024-11-27 12:06:31.147406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.360 [2024-11-27 12:06:31.185308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.360 [2024-11-27 12:06:31.185599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:41.360 [2024-11-27 12:06:31.185625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.575 ms 00:22:41.360 [2024-11-27 12:06:31.185639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.360 [2024-11-27 12:06:31.185815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.360 [2024-11-27 12:06:31.185833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:41.360 [2024-11-27 12:06:31.185864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:22:41.360 [2024-11-27 12:06:31.185884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.360 [2024-11-27 12:06:31.247797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.360 [2024-11-27 12:06:31.247841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:41.360 [2024-11-27 12:06:31.247856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.951 ms 00:22:41.360 [2024-11-27 12:06:31.247870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.360 [2024-11-27 12:06:31.248006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.360 [2024-11-27 12:06:31.248022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:41.360 [2024-11-27 12:06:31.248034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:41.360 [2024-11-27 12:06:31.248046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.360 [2024-11-27 12:06:31.248575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.360 [2024-11-27 12:06:31.248606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:41.360 [2024-11-27 12:06:31.248617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:22:41.360 [2024-11-27 12:06:31.248630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.360 [2024-11-27 12:06:31.248758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.360 [2024-11-27 12:06:31.248773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:41.360 [2024-11-27 12:06:31.248801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:22:41.360 [2024-11-27 12:06:31.248817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.360 [2024-11-27 12:06:31.270221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.360 [2024-11-27 12:06:31.270458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:22:41.360 [2024-11-27 12:06:31.270482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.364 ms 00:22:41.360 [2024-11-27 12:06:31.270496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.360 [2024-11-27 12:06:31.282946] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:41.360 [2024-11-27 12:06:31.299488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.360 [2024-11-27 12:06:31.299529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:41.360 [2024-11-27 12:06:31.299546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.850 ms 00:22:41.360 [2024-11-27 12:06:31.299556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.360 [2024-11-27 12:06:31.398400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.360 [2024-11-27 12:06:31.398459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:41.360 [2024-11-27 12:06:31.398479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.861 ms 00:22:41.360 [2024-11-27 12:06:31.398490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.360 [2024-11-27 12:06:31.398756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.360 [2024-11-27 12:06:31.398770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:41.360 [2024-11-27 12:06:31.398788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:22:41.360 [2024-11-27 12:06:31.398799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.619 [2024-11-27 12:06:31.434574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.619 [2024-11-27 12:06:31.434613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:41.619 [2024-11-27 12:06:31.434630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.776 ms 00:22:41.619 [2024-11-27 12:06:31.434660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.619 [2024-11-27 12:06:31.469594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.619 [2024-11-27 12:06:31.469629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:41.620 [2024-11-27 12:06:31.469646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.871 ms 00:22:41.620 [2024-11-27 12:06:31.469672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.620 [2024-11-27 12:06:31.470678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.620 [2024-11-27 12:06:31.470711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:41.620 [2024-11-27 12:06:31.470726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:22:41.620 [2024-11-27 12:06:31.470736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.620 [2024-11-27 12:06:31.579473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.620 [2024-11-27 12:06:31.579651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:41.620 [2024-11-27 12:06:31.579683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.841 ms 00:22:41.620 [2024-11-27 12:06:31.579694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
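For orientation, the FTL startup being traced here is driven by the rpc.py sequence below. This is a condensed sketch assembled from the commands visible earlier in this log; it assumes an SPDK target application is already running, and the bdev names, PCI addresses, sizes, and UUIDs are the ones from this particular run:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    lvol=7d7feaf4-3d20-41c3-b19a-a3120c164d47    # aliased as lvs/nvme0n1p0 in this run

    # Attach the NV-cache controller (the base controller behind nvme0n1 at
    # 0000:00:11.0 was attached earlier in the test).
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0

    # Create the lvstore on the base namespace, then a thin-provisioned
    # 103424 MiB volume inside it.
    $RPC bdev_lvol_create_lvstore nvme0n1 lvs
    $RPC bdev_lvol_create nvme0n1p0 103424 -t -u d910be63-3b61-4c9f-bad5-993166c52464

    # The get_bdev_size helper seen three times above: MiB = block_size * num_blocks / 2^20.
    bs=$($RPC bdev_get_bdevs -b "$lvol" | jq '.[] .block_size')   # 4096
    nb=$($RPC bdev_get_bdevs -b "$lvol" | jq '.[] .num_blocks')   # 26476544
    echo $(( bs * nb / 1024 / 1024 ))                             # 103424

    # Split a 5171 MiB write-buffer cache off nvc0n1, then create the FTL bdev
    # on top of the lvol with nvc0n1p0 as its NV cache.
    $RPC bdev_split_create nvc0n1 -s 5171 1
    $RPC -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The 5171 MiB cache figure comes from the same arithmetic applied to the 1310720-block, 4096-byte-block cache namespace (5120 MiB) plus the small margin computed by ftl/common.sh.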
00:22:41.620 [2024-11-27 12:06:31.618199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.620 [2024-11-27 12:06:31.618352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:41.620 [2024-11-27 12:06:31.618391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.398 ms 00:22:41.620 [2024-11-27 12:06:31.618402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.620 [2024-11-27 12:06:31.655517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.620 [2024-11-27 12:06:31.655554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:41.620 [2024-11-27 12:06:31.655571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.047 ms 00:22:41.620 [2024-11-27 12:06:31.655581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.879 [2024-11-27 12:06:31.691479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.879 [2024-11-27 12:06:31.691655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:41.879 [2024-11-27 12:06:31.691680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.825 ms 00:22:41.879 [2024-11-27 12:06:31.691691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.879 [2024-11-27 12:06:31.691808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.879 [2024-11-27 12:06:31.691822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:41.879 [2024-11-27 12:06:31.691840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:41.879 [2024-11-27 12:06:31.691851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.879 [2024-11-27 12:06:31.691958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.879 [2024-11-27 12:06:31.691969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:41.879 [2024-11-27 12:06:31.691982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:41.879 [2024-11-27 12:06:31.691995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.879 [2024-11-27 12:06:31.693088] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:41.879 [2024-11-27 12:06:31.697349] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4024.818 ms, result 0 00:22:41.879 [2024-11-27 12:06:31.698471] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:41.879 { 00:22:41.879 "name": "ftl0", 00:22:41.879 "uuid": "ac0f0c90-8a1c-488f-b0e9-47cb15d830e6" 00:22:41.879 } 00:22:41.879 12:06:31 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:22:41.879 12:06:31 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:22:41.879 12:06:31 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:41.879 12:06:31 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:22:41.879 12:06:31 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:41.879 12:06:31 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:41.879 12:06:31 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:41.879 12:06:31 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:22:42.138 [ 00:22:42.138 { 00:22:42.138 "name": "ftl0", 00:22:42.138 "aliases": [ 00:22:42.138 "ac0f0c90-8a1c-488f-b0e9-47cb15d830e6" 00:22:42.138 ], 00:22:42.138 "product_name": "FTL disk", 00:22:42.138 "block_size": 4096, 00:22:42.138 "num_blocks": 23592960, 00:22:42.138 "uuid": "ac0f0c90-8a1c-488f-b0e9-47cb15d830e6", 00:22:42.138 "assigned_rate_limits": { 00:22:42.138 "rw_ios_per_sec": 0, 00:22:42.138 "rw_mbytes_per_sec": 0, 00:22:42.138 "r_mbytes_per_sec": 0, 00:22:42.138 "w_mbytes_per_sec": 0 00:22:42.138 }, 00:22:42.138 "claimed": false, 00:22:42.138 "zoned": false, 00:22:42.138 "supported_io_types": { 00:22:42.138 "read": true, 00:22:42.138 "write": true, 00:22:42.138 "unmap": true, 00:22:42.138 "flush": true, 00:22:42.138 "reset": false, 00:22:42.138 "nvme_admin": false, 00:22:42.138 "nvme_io": false, 00:22:42.138 "nvme_io_md": false, 00:22:42.138 "write_zeroes": true, 00:22:42.138 "zcopy": false, 00:22:42.138 "get_zone_info": false, 00:22:42.138 "zone_management": false, 00:22:42.138 "zone_append": false, 00:22:42.138 "compare": false, 00:22:42.138 "compare_and_write": false, 00:22:42.138 "abort": false, 00:22:42.138 "seek_hole": false, 00:22:42.138 "seek_data": false, 00:22:42.138 "copy": false, 00:22:42.138 "nvme_iov_md": false 00:22:42.138 }, 00:22:42.138 "driver_specific": { 00:22:42.138 "ftl": { 00:22:42.138 "base_bdev": "7d7feaf4-3d20-41c3-b19a-a3120c164d47", 00:22:42.138 "cache": "nvc0n1p0" 00:22:42.138 } 00:22:42.139 } 00:22:42.139 } 00:22:42.139 ] 00:22:42.139 12:06:32 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:22:42.139 12:06:32 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:22:42.139 12:06:32 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:42.398 12:06:32 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:22:42.398 12:06:32 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:22:42.657 12:06:32 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:22:42.657 { 00:22:42.657 "name": "ftl0", 00:22:42.657 "aliases": [ 00:22:42.657 "ac0f0c90-8a1c-488f-b0e9-47cb15d830e6" 00:22:42.657 ], 00:22:42.657 "product_name": "FTL disk", 00:22:42.657 "block_size": 4096, 00:22:42.657 "num_blocks": 23592960, 00:22:42.657 "uuid": "ac0f0c90-8a1c-488f-b0e9-47cb15d830e6", 00:22:42.657 "assigned_rate_limits": { 00:22:42.657 "rw_ios_per_sec": 0, 00:22:42.657 "rw_mbytes_per_sec": 0, 00:22:42.657 "r_mbytes_per_sec": 0, 00:22:42.657 "w_mbytes_per_sec": 0 00:22:42.657 }, 00:22:42.657 "claimed": false, 00:22:42.657 "zoned": false, 00:22:42.657 "supported_io_types": { 00:22:42.657 "read": true, 00:22:42.657 "write": true, 00:22:42.657 "unmap": true, 00:22:42.657 "flush": true, 00:22:42.657 "reset": false, 00:22:42.657 "nvme_admin": false, 00:22:42.657 "nvme_io": false, 00:22:42.657 "nvme_io_md": false, 00:22:42.657 "write_zeroes": true, 00:22:42.657 "zcopy": false, 00:22:42.657 "get_zone_info": false, 00:22:42.657 "zone_management": false, 00:22:42.657 "zone_append": false, 00:22:42.657 "compare": false, 00:22:42.657 "compare_and_write": false, 00:22:42.657 "abort": false, 00:22:42.657 "seek_hole": false, 00:22:42.657 "seek_data": false, 00:22:42.657 "copy": false, 00:22:42.657 "nvme_iov_md": false 00:22:42.657 }, 00:22:42.657 "driver_specific": { 00:22:42.657 "ftl": { 00:22:42.657 "base_bdev": "7d7feaf4-3d20-41c3-b19a-a3120c164d47", 
00:22:42.657 "cache": "nvc0n1p0" 00:22:42.657 } 00:22:42.657 } 00:22:42.657 } 00:22:42.657 ]' 00:22:42.657 12:06:32 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:22:42.657 12:06:32 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:22:42.657 12:06:32 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:42.917 [2024-11-27 12:06:32.744669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.918 [2024-11-27 12:06:32.744863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:42.918 [2024-11-27 12:06:32.745007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:42.918 [2024-11-27 12:06:32.745050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.918 [2024-11-27 12:06:32.745142] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:42.918 [2024-11-27 12:06:32.749673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.918 [2024-11-27 12:06:32.749815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:42.918 [2024-11-27 12:06:32.749843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.274 ms 00:22:42.918 [2024-11-27 12:06:32.749853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.918 [2024-11-27 12:06:32.750958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.918 [2024-11-27 12:06:32.750981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:42.918 [2024-11-27 12:06:32.751006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:22:42.918 [2024-11-27 12:06:32.751016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.918 [2024-11-27 12:06:32.753868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.918 [2024-11-27 12:06:32.753890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:42.918 [2024-11-27 12:06:32.753904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.796 ms 00:22:42.918 [2024-11-27 12:06:32.753914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.918 [2024-11-27 12:06:32.759580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.918 [2024-11-27 12:06:32.759611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:42.918 [2024-11-27 12:06:32.759626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.570 ms 00:22:42.918 [2024-11-27 12:06:32.759652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.918 [2024-11-27 12:06:32.796231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.918 [2024-11-27 12:06:32.796269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:42.918 [2024-11-27 12:06:32.796289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.485 ms 00:22:42.918 [2024-11-27 12:06:32.796316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.918 [2024-11-27 12:06:32.818655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.918 [2024-11-27 12:06:32.818695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:42.918 [2024-11-27 12:06:32.818715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 22.244 ms 00:22:42.918 [2024-11-27 12:06:32.818726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.918 [2024-11-27 12:06:32.819045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.918 [2024-11-27 12:06:32.819060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:42.918 [2024-11-27 12:06:32.819074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:22:42.918 [2024-11-27 12:06:32.819084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.918 [2024-11-27 12:06:32.856176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.918 [2024-11-27 12:06:32.856213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:42.918 [2024-11-27 12:06:32.856229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.096 ms 00:22:42.918 [2024-11-27 12:06:32.856240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.918 [2024-11-27 12:06:32.892861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.918 [2024-11-27 12:06:32.892899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:42.918 [2024-11-27 12:06:32.892919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.561 ms 00:22:42.918 [2024-11-27 12:06:32.892929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.918 [2024-11-27 12:06:32.929202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.918 [2024-11-27 12:06:32.929238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:42.918 [2024-11-27 12:06:32.929254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.216 ms 00:22:42.918 [2024-11-27 12:06:32.929264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.918 [2024-11-27 12:06:32.963969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.918 [2024-11-27 12:06:32.964021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:42.918 [2024-11-27 12:06:32.964038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.503 ms 00:22:42.918 [2024-11-27 12:06:32.964048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.918 [2024-11-27 12:06:32.964167] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:42.918 [2024-11-27 12:06:32.964186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964277] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 
[2024-11-27 12:06:32.964642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:42.918 [2024-11-27 12:06:32.964838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.964851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.964861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.964874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.964884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.964900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.964910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.964923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.964933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:22:42.919 [2024-11-27 12:06:32.964945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.964956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.964968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.964979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.964993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:42.919 [2024-11-27 12:06:32.965508] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:42.919 [2024-11-27 12:06:32.965524] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ac0f0c90-8a1c-488f-b0e9-47cb15d830e6 00:22:42.919 [2024-11-27 12:06:32.965535] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:42.919 [2024-11-27 12:06:32.965547] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:42.919 [2024-11-27 12:06:32.965560] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:42.919 [2024-11-27 12:06:32.965573] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:42.919 [2024-11-27 12:06:32.965602] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:42.919 [2024-11-27 12:06:32.965615] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:22:42.919 [2024-11-27 12:06:32.965625] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:42.919 [2024-11-27 12:06:32.965636] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:42.919 [2024-11-27 12:06:32.965645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:42.919 [2024-11-27 12:06:32.965657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.919 [2024-11-27 12:06:32.965668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:42.919 [2024-11-27 12:06:32.965682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.494 ms 00:22:42.919 [2024-11-27 12:06:32.965691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.178 [2024-11-27 12:06:32.985560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.178 [2024-11-27 12:06:32.985594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:43.178 [2024-11-27 12:06:32.985612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.838 ms 00:22:43.179 [2024-11-27 12:06:32.985638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.179 [2024-11-27 12:06:32.986244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.179 [2024-11-27 12:06:32.986261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:43.179 [2024-11-27 12:06:32.986275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:22:43.179 [2024-11-27 12:06:32.986285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.179 [2024-11-27 12:06:33.054604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.179 [2024-11-27 12:06:33.054639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:43.179 [2024-11-27 12:06:33.054654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.179 [2024-11-27 12:06:33.054664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.179 [2024-11-27 12:06:33.054786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.179 [2024-11-27 12:06:33.054799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:43.179 [2024-11-27 12:06:33.054812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.179 [2024-11-27 12:06:33.054822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.179 [2024-11-27 12:06:33.054917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.179 [2024-11-27 12:06:33.054932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:43.179 [2024-11-27 12:06:33.054948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.179 [2024-11-27 12:06:33.054958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.179 [2024-11-27 12:06:33.055016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.179 [2024-11-27 12:06:33.055027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:43.179 [2024-11-27 12:06:33.055040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.179 [2024-11-27 12:06:33.055050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.179 [2024-11-27 12:06:33.181982] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.179 [2024-11-27 12:06:33.182037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:43.179 [2024-11-27 12:06:33.182053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.179 [2024-11-27 12:06:33.182064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.438 [2024-11-27 12:06:33.280030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.438 [2024-11-27 12:06:33.280081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:43.438 [2024-11-27 12:06:33.280097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.438 [2024-11-27 12:06:33.280124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.438 [2024-11-27 12:06:33.280271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.438 [2024-11-27 12:06:33.280284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:43.438 [2024-11-27 12:06:33.280304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.438 [2024-11-27 12:06:33.280314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.438 [2024-11-27 12:06:33.280444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.438 [2024-11-27 12:06:33.280456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:43.438 [2024-11-27 12:06:33.280469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.438 [2024-11-27 12:06:33.280478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.438 [2024-11-27 12:06:33.280638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.438 [2024-11-27 12:06:33.280653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:43.438 [2024-11-27 12:06:33.280666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.438 [2024-11-27 12:06:33.280679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.438 [2024-11-27 12:06:33.280769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.438 [2024-11-27 12:06:33.280783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:43.438 [2024-11-27 12:06:33.280796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.438 [2024-11-27 12:06:33.280806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.438 [2024-11-27 12:06:33.280897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.438 [2024-11-27 12:06:33.280909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:43.438 [2024-11-27 12:06:33.280924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.438 [2024-11-27 12:06:33.280937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.438 [2024-11-27 12:06:33.281025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.438 [2024-11-27 12:06:33.281037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:43.438 [2024-11-27 12:06:33.281050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.438 [2024-11-27 12:06:33.281059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:43.438 [2024-11-27 12:06:33.281322] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 537.506 ms, result 0 00:22:43.438 true 00:22:43.438 12:06:33 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78121 00:22:43.438 12:06:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78121 ']' 00:22:43.438 12:06:33 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78121 00:22:43.438 12:06:33 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:22:43.438 12:06:33 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:43.438 12:06:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78121 00:22:43.438 12:06:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:43.438 12:06:33 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:43.438 killing process with pid 78121 00:22:43.438 12:06:33 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78121' 00:22:43.438 12:06:33 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78121 00:22:43.438 12:06:33 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78121 00:22:48.719 12:06:38 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:22:49.287 65536+0 records in 00:22:49.287 65536+0 records out 00:22:49.287 268435456 bytes (268 MB, 256 MiB) copied, 0.973246 s, 276 MB/s 00:22:49.287 12:06:39 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:49.287 [2024-11-27 12:06:39.256026] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:22:49.287 [2024-11-27 12:06:39.256171] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78321 ] 00:22:49.547 [2024-11-27 12:06:39.438725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.547 [2024-11-27 12:06:39.543966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.116 [2024-11-27 12:06:39.891745] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:50.116 [2024-11-27 12:06:39.891809] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:50.116 [2024-11-27 12:06:40.053416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.116 [2024-11-27 12:06:40.053457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:50.116 [2024-11-27 12:06:40.053472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:50.116 [2024-11-27 12:06:40.053493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.116 [2024-11-27 12:06:40.056650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.116 [2024-11-27 12:06:40.056686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:50.116 [2024-11-27 12:06:40.056709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.126 ms 00:22:50.116 [2024-11-27 12:06:40.056719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.116 [2024-11-27 12:06:40.056826] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:50.116 [2024-11-27 12:06:40.057907] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:50.116 [2024-11-27 12:06:40.057940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.116 [2024-11-27 12:06:40.057952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:50.116 [2024-11-27 12:06:40.057963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:22:50.116 [2024-11-27 12:06:40.057973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.116 [2024-11-27 12:06:40.059433] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:50.116 [2024-11-27 12:06:40.077952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.116 [2024-11-27 12:06:40.077985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:50.116 [2024-11-27 12:06:40.077999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.549 ms 00:22:50.116 [2024-11-27 12:06:40.078025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.116 [2024-11-27 12:06:40.078121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.116 [2024-11-27 12:06:40.078135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:50.116 [2024-11-27 12:06:40.078147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:50.116 [2024-11-27 12:06:40.078156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.116 [2024-11-27 12:06:40.084756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:50.116 [2024-11-27 12:06:40.084780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:50.116 [2024-11-27 12:06:40.084791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.570 ms 00:22:50.117 [2024-11-27 12:06:40.084800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.117 [2024-11-27 12:06:40.084901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.117 [2024-11-27 12:06:40.084915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:50.117 [2024-11-27 12:06:40.084925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:50.117 [2024-11-27 12:06:40.084935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.117 [2024-11-27 12:06:40.084964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.117 [2024-11-27 12:06:40.084975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:50.117 [2024-11-27 12:06:40.084984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:50.117 [2024-11-27 12:06:40.084994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.117 [2024-11-27 12:06:40.085015] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:50.117 [2024-11-27 12:06:40.089544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.117 [2024-11-27 12:06:40.089574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:50.117 [2024-11-27 12:06:40.089585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.542 ms 00:22:50.117 [2024-11-27 12:06:40.089594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.117 [2024-11-27 12:06:40.089655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.117 [2024-11-27 12:06:40.089667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:50.117 [2024-11-27 12:06:40.089677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:50.117 [2024-11-27 12:06:40.089686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.117 [2024-11-27 12:06:40.089708] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:50.117 [2024-11-27 12:06:40.089735] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:50.117 [2024-11-27 12:06:40.089784] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:50.117 [2024-11-27 12:06:40.089801] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:50.117 [2024-11-27 12:06:40.089887] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:50.117 [2024-11-27 12:06:40.089900] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:50.117 [2024-11-27 12:06:40.089913] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:50.117 [2024-11-27 12:06:40.089929] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:50.117 [2024-11-27 12:06:40.089940] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:50.117 [2024-11-27 12:06:40.089952] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:50.117 [2024-11-27 12:06:40.089962] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:50.117 [2024-11-27 12:06:40.089972] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:50.117 [2024-11-27 12:06:40.089982] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:50.117 [2024-11-27 12:06:40.089992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.117 [2024-11-27 12:06:40.090002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:50.117 [2024-11-27 12:06:40.090013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:22:50.117 [2024-11-27 12:06:40.090022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.117 [2024-11-27 12:06:40.090097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.117 [2024-11-27 12:06:40.090111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:50.117 [2024-11-27 12:06:40.090121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:50.117 [2024-11-27 12:06:40.090131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.117 [2024-11-27 12:06:40.090219] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:50.117 [2024-11-27 12:06:40.090231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:50.117 [2024-11-27 12:06:40.090243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:50.117 [2024-11-27 12:06:40.090254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.117 [2024-11-27 12:06:40.090264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:50.117 [2024-11-27 12:06:40.090273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:50.117 [2024-11-27 12:06:40.090282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:50.117 [2024-11-27 12:06:40.090292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:50.117 [2024-11-27 12:06:40.090301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:50.117 [2024-11-27 12:06:40.090310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:50.117 [2024-11-27 12:06:40.090320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:50.117 [2024-11-27 12:06:40.090341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:50.117 [2024-11-27 12:06:40.090350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:50.117 [2024-11-27 12:06:40.090359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:50.117 [2024-11-27 12:06:40.090368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:50.117 [2024-11-27 12:06:40.090397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.117 [2024-11-27 12:06:40.090423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:50.117 [2024-11-27 12:06:40.090432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:50.117 [2024-11-27 12:06:40.090442] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.117 [2024-11-27 12:06:40.090452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:50.117 [2024-11-27 12:06:40.090461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:50.117 [2024-11-27 12:06:40.090471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.117 [2024-11-27 12:06:40.090481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:50.117 [2024-11-27 12:06:40.090490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:50.117 [2024-11-27 12:06:40.090499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.117 [2024-11-27 12:06:40.090508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:50.117 [2024-11-27 12:06:40.090517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:50.117 [2024-11-27 12:06:40.090526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.117 [2024-11-27 12:06:40.090535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:50.117 [2024-11-27 12:06:40.090544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:50.117 [2024-11-27 12:06:40.090553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.117 [2024-11-27 12:06:40.090562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:50.117 [2024-11-27 12:06:40.090572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:50.117 [2024-11-27 12:06:40.090581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:50.117 [2024-11-27 12:06:40.090589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:50.117 [2024-11-27 12:06:40.090605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:50.117 [2024-11-27 12:06:40.090614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:50.117 [2024-11-27 12:06:40.090623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:50.117 [2024-11-27 12:06:40.090632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:50.117 [2024-11-27 12:06:40.090641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.118 [2024-11-27 12:06:40.090649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:50.118 [2024-11-27 12:06:40.090658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:50.118 [2024-11-27 12:06:40.090668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.118 [2024-11-27 12:06:40.090677] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:50.118 [2024-11-27 12:06:40.090687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:50.118 [2024-11-27 12:06:40.090701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:50.118 [2024-11-27 12:06:40.090710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.118 [2024-11-27 12:06:40.090720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:50.118 [2024-11-27 12:06:40.090730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:50.118 [2024-11-27 12:06:40.090739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:50.118 
[2024-11-27 12:06:40.090748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:50.118 [2024-11-27 12:06:40.090757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:50.118 [2024-11-27 12:06:40.090767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:50.118 [2024-11-27 12:06:40.090777] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:50.118 [2024-11-27 12:06:40.090789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:50.118 [2024-11-27 12:06:40.090800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:50.118 [2024-11-27 12:06:40.090810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:50.118 [2024-11-27 12:06:40.090820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:50.118 [2024-11-27 12:06:40.090830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:50.118 [2024-11-27 12:06:40.090840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:50.118 [2024-11-27 12:06:40.090850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:50.118 [2024-11-27 12:06:40.090860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:50.118 [2024-11-27 12:06:40.090870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:50.118 [2024-11-27 12:06:40.090880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:50.118 [2024-11-27 12:06:40.090890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:50.118 [2024-11-27 12:06:40.090900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:50.118 [2024-11-27 12:06:40.090910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:50.118 [2024-11-27 12:06:40.090920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:50.118 [2024-11-27 12:06:40.090930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:50.118 [2024-11-27 12:06:40.090940] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:50.118 [2024-11-27 12:06:40.090952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:50.118 [2024-11-27 12:06:40.090962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:50.118 [2024-11-27 12:06:40.090972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:50.118 [2024-11-27 12:06:40.090984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:50.118 [2024-11-27 12:06:40.090996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:50.118 [2024-11-27 12:06:40.091011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.118 [2024-11-27 12:06:40.091025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:50.118 [2024-11-27 12:06:40.091035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.846 ms 00:22:50.118 [2024-11-27 12:06:40.091044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.118 [2024-11-27 12:06:40.128200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.118 [2024-11-27 12:06:40.128233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:50.118 [2024-11-27 12:06:40.128246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.158 ms 00:22:50.118 [2024-11-27 12:06:40.128255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.118 [2024-11-27 12:06:40.128383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.118 [2024-11-27 12:06:40.128413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:50.118 [2024-11-27 12:06:40.128424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:22:50.118 [2024-11-27 12:06:40.128434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.378 [2024-11-27 12:06:40.183191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.378 [2024-11-27 12:06:40.183223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:50.378 [2024-11-27 12:06:40.183239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.822 ms 00:22:50.378 [2024-11-27 12:06:40.183250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.378 [2024-11-27 12:06:40.183336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.378 [2024-11-27 12:06:40.183349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:50.378 [2024-11-27 12:06:40.183372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:50.378 [2024-11-27 12:06:40.183382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.378 [2024-11-27 12:06:40.183842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.378 [2024-11-27 12:06:40.183855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:50.378 [2024-11-27 12:06:40.183872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:22:50.378 [2024-11-27 12:06:40.183883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.378 [2024-11-27 12:06:40.183999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.378 [2024-11-27 12:06:40.184018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:50.378 [2024-11-27 12:06:40.184029] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:22:50.378 [2024-11-27 12:06:40.184039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.378 [2024-11-27 12:06:40.203598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.378 [2024-11-27 12:06:40.203628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:50.378 [2024-11-27 12:06:40.203642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.568 ms 00:22:50.378 [2024-11-27 12:06:40.203651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.378 [2024-11-27 12:06:40.222250] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:50.378 [2024-11-27 12:06:40.222307] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:50.378 [2024-11-27 12:06:40.222323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.378 [2024-11-27 12:06:40.222333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:50.378 [2024-11-27 12:06:40.222344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.608 ms 00:22:50.378 [2024-11-27 12:06:40.222353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.378 [2024-11-27 12:06:40.249953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.378 [2024-11-27 12:06:40.249988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:50.378 [2024-11-27 12:06:40.250001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.542 ms 00:22:50.378 [2024-11-27 12:06:40.250027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.378 [2024-11-27 12:06:40.267335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.378 [2024-11-27 12:06:40.267386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:50.378 [2024-11-27 12:06:40.267399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.259 ms 00:22:50.378 [2024-11-27 12:06:40.267409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.378 [2024-11-27 12:06:40.284577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.378 [2024-11-27 12:06:40.284700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:50.378 [2024-11-27 12:06:40.284736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.124 ms 00:22:50.378 [2024-11-27 12:06:40.284746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.378 [2024-11-27 12:06:40.285582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.378 [2024-11-27 12:06:40.285609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:50.378 [2024-11-27 12:06:40.285620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:22:50.378 [2024-11-27 12:06:40.285631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.378 [2024-11-27 12:06:40.367961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.378 [2024-11-27 12:06:40.368019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:50.378 [2024-11-27 12:06:40.368036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 82.434 ms 00:22:50.378 [2024-11-27 12:06:40.368063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.378 [2024-11-27 12:06:40.378048] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:50.378 [2024-11-27 12:06:40.393277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.378 [2024-11-27 12:06:40.393317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:50.378 [2024-11-27 12:06:40.393332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.162 ms 00:22:50.378 [2024-11-27 12:06:40.393342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.378 [2024-11-27 12:06:40.393502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.378 [2024-11-27 12:06:40.393516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:50.378 [2024-11-27 12:06:40.393527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:50.378 [2024-11-27 12:06:40.393537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.378 [2024-11-27 12:06:40.393592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.378 [2024-11-27 12:06:40.393603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:50.378 [2024-11-27 12:06:40.393613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:50.378 [2024-11-27 12:06:40.393623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.378 [2024-11-27 12:06:40.393657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.378 [2024-11-27 12:06:40.393673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:50.378 [2024-11-27 12:06:40.393684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:50.378 [2024-11-27 12:06:40.393693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.378 [2024-11-27 12:06:40.393741] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:50.378 [2024-11-27 12:06:40.393754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.378 [2024-11-27 12:06:40.393779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:50.378 [2024-11-27 12:06:40.393790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:50.378 [2024-11-27 12:06:40.393799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.637 [2024-11-27 12:06:40.428658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.637 [2024-11-27 12:06:40.428711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:50.637 [2024-11-27 12:06:40.428726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.893 ms 00:22:50.637 [2024-11-27 12:06:40.428737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.638 [2024-11-27 12:06:40.428851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.638 [2024-11-27 12:06:40.428865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:50.638 [2024-11-27 12:06:40.428876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:50.638 [2024-11-27 12:06:40.428886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:50.638 [2024-11-27 12:06:40.429948] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:50.638 [2024-11-27 12:06:40.434277] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 376.840 ms, result 0 00:22:50.638 [2024-11-27 12:06:40.435072] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:50.638 [2024-11-27 12:06:40.453696] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:51.574  [2024-11-27T12:06:42.563Z] Copying: 24/256 [MB] (24 MBps) [2024-11-27T12:06:43.502Z] Copying: 47/256 [MB] (23 MBps) [2024-11-27T12:06:44.881Z] Copying: 72/256 [MB] (24 MBps) [2024-11-27T12:06:45.820Z] Copying: 96/256 [MB] (24 MBps) [2024-11-27T12:06:46.757Z] Copying: 120/256 [MB] (24 MBps) [2024-11-27T12:06:47.697Z] Copying: 144/256 [MB] (23 MBps) [2024-11-27T12:06:48.635Z] Copying: 166/256 [MB] (22 MBps) [2024-11-27T12:06:49.573Z] Copying: 189/256 [MB] (22 MBps) [2024-11-27T12:06:50.512Z] Copying: 213/256 [MB] (23 MBps) [2024-11-27T12:06:51.453Z] Copying: 237/256 [MB] (23 MBps) [2024-11-27T12:06:51.453Z] Copying: 256/256 [MB] (average 23 MBps)[2024-11-27 12:06:51.217269] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:01.400 [2024-11-27 12:06:51.231376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.400 [2024-11-27 12:06:51.231542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:01.400 [2024-11-27 12:06:51.231675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:01.400 [2024-11-27 12:06:51.231720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.400 [2024-11-27 12:06:51.231775] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:01.400 [2024-11-27 12:06:51.235858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.400 [2024-11-27 12:06:51.235972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:01.400 [2024-11-27 12:06:51.236113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.969 ms 00:23:01.400 [2024-11-27 12:06:51.236129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.400 [2024-11-27 12:06:51.237981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.400 [2024-11-27 12:06:51.238116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:01.400 [2024-11-27 12:06:51.238136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.823 ms 00:23:01.400 [2024-11-27 12:06:51.238146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.400 [2024-11-27 12:06:51.244982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.400 [2024-11-27 12:06:51.245148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:01.400 [2024-11-27 12:06:51.245169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.822 ms 00:23:01.400 [2024-11-27 12:06:51.245179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.400 [2024-11-27 12:06:51.250648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.400 [2024-11-27 12:06:51.250677] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:01.400 [2024-11-27 12:06:51.250689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.424 ms 00:23:01.400 [2024-11-27 12:06:51.250697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.400 [2024-11-27 12:06:51.285277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.400 [2024-11-27 12:06:51.285449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:01.400 [2024-11-27 12:06:51.285534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.575 ms 00:23:01.400 [2024-11-27 12:06:51.285570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.400 [2024-11-27 12:06:51.305740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.400 [2024-11-27 12:06:51.305886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:01.400 [2024-11-27 12:06:51.305979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.107 ms 00:23:01.400 [2024-11-27 12:06:51.306015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.400 [2024-11-27 12:06:51.306161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.400 [2024-11-27 12:06:51.306202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:01.400 [2024-11-27 12:06:51.306233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:23:01.400 [2024-11-27 12:06:51.306324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.400 [2024-11-27 12:06:51.341378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.400 [2024-11-27 12:06:51.341531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:01.400 [2024-11-27 12:06:51.341650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.045 ms 00:23:01.400 [2024-11-27 12:06:51.341667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.400 [2024-11-27 12:06:51.376476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.400 [2024-11-27 12:06:51.376620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:01.400 [2024-11-27 12:06:51.376640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.812 ms 00:23:01.400 [2024-11-27 12:06:51.376650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.400 [2024-11-27 12:06:51.412189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.400 [2024-11-27 12:06:51.412225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:01.400 [2024-11-27 12:06:51.412238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.522 ms 00:23:01.400 [2024-11-27 12:06:51.412247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.400 [2024-11-27 12:06:51.446132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.400 [2024-11-27 12:06:51.446167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:01.400 [2024-11-27 12:06:51.446181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.860 ms 00:23:01.400 [2024-11-27 12:06:51.446190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.400 [2024-11-27 12:06:51.446250] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:23:01.400 [2024-11-27 12:06:51.446268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:01.400 [2024-11-27 12:06:51.446280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:01.400 [2024-11-27 12:06:51.446292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:01.400 [2024-11-27 12:06:51.446303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:01.400 [2024-11-27 12:06:51.446315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:01.400 [2024-11-27 12:06:51.446325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:01.400 [2024-11-27 12:06:51.446335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:01.400 [2024-11-27 12:06:51.446346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:01.400 [2024-11-27 12:06:51.446370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:01.400 [2024-11-27 12:06:51.446381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:01.400 [2024-11-27 12:06:51.446392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:01.400 [2024-11-27 12:06:51.446419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:01.400 [2024-11-27 12:06:51.446430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:01.400 [2024-11-27 12:06:51.446440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:01.400 [2024-11-27 12:06:51.446451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:01.400 [2024-11-27 12:06:51.446462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:01.400 [2024-11-27 12:06:51.446472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:01.400 [2024-11-27 12:06:51.446482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:01.400 [2024-11-27 12:06:51.446493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.446994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447075] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447346] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:01.401 [2024-11-27 12:06:51.447372] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:01.401 [2024-11-27 12:06:51.447382] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ac0f0c90-8a1c-488f-b0e9-47cb15d830e6 00:23:01.401 [2024-11-27 12:06:51.447393] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:01.401 [2024-11-27 12:06:51.447402] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:01.401 [2024-11-27 12:06:51.447412] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:01.401 [2024-11-27 12:06:51.447422] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:01.401 [2024-11-27 12:06:51.447431] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:01.401 [2024-11-27 12:06:51.447442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:01.401 [2024-11-27 12:06:51.447451] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:01.401 [2024-11-27 12:06:51.447460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:01.401 [2024-11-27 12:06:51.447469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:01.402 [2024-11-27 12:06:51.447479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.402 [2024-11-27 12:06:51.447493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:01.402 [2024-11-27 12:06:51.447503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.231 ms 00:23:01.402 [2024-11-27 12:06:51.447513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.661 [2024-11-27 12:06:51.467329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.661 [2024-11-27 12:06:51.467373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:01.661 [2024-11-27 12:06:51.467385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.827 ms 00:23:01.661 [2024-11-27 12:06:51.467395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.661 [2024-11-27 12:06:51.467992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.661 [2024-11-27 12:06:51.468008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:01.661 [2024-11-27 12:06:51.468019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:23:01.661 [2024-11-27 12:06:51.468029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.661 [2024-11-27 12:06:51.519486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:01.661 [2024-11-27 12:06:51.519520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:01.661 [2024-11-27 12:06:51.519532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:01.661 [2024-11-27 12:06:51.519542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.661 [2024-11-27 12:06:51.519631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:01.661 [2024-11-27 12:06:51.519642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:01.661 [2024-11-27 12:06:51.519653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:01.661 [2024-11-27 12:06:51.519661] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.661 [2024-11-27 12:06:51.519710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:01.661 [2024-11-27 12:06:51.519723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:01.661 [2024-11-27 12:06:51.519733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:01.661 [2024-11-27 12:06:51.519742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.661 [2024-11-27 12:06:51.519759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:01.661 [2024-11-27 12:06:51.519773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:01.661 [2024-11-27 12:06:51.519782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:01.661 [2024-11-27 12:06:51.519791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.661 [2024-11-27 12:06:51.635053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:01.661 [2024-11-27 12:06:51.635105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:01.661 [2024-11-27 12:06:51.635119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:01.661 [2024-11-27 12:06:51.635130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.008 [2024-11-27 12:06:51.732520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.008 [2024-11-27 12:06:51.732566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:02.008 [2024-11-27 12:06:51.732581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.008 [2024-11-27 12:06:51.732591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.008 [2024-11-27 12:06:51.732655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.008 [2024-11-27 12:06:51.732666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:02.008 [2024-11-27 12:06:51.732677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.008 [2024-11-27 12:06:51.732687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.008 [2024-11-27 12:06:51.732717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.008 [2024-11-27 12:06:51.732727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:02.008 [2024-11-27 12:06:51.732744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.008 [2024-11-27 12:06:51.732754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.008 [2024-11-27 12:06:51.732869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.008 [2024-11-27 12:06:51.732883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:02.008 [2024-11-27 12:06:51.732893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.008 [2024-11-27 12:06:51.732903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.008 [2024-11-27 12:06:51.732940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.008 [2024-11-27 12:06:51.732952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:02.008 [2024-11-27 12:06:51.732963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:23:02.008 [2024-11-27 12:06:51.732977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.008 [2024-11-27 12:06:51.733020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.008 [2024-11-27 12:06:51.733031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:02.008 [2024-11-27 12:06:51.733041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.008 [2024-11-27 12:06:51.733051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.008 [2024-11-27 12:06:51.733094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.008 [2024-11-27 12:06:51.733106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:02.008 [2024-11-27 12:06:51.733120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.008 [2024-11-27 12:06:51.733130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.008 [2024-11-27 12:06:51.733267] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 502.702 ms, result 0 00:23:02.946 00:23:02.946 00:23:02.946 12:06:52 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78462 00:23:02.946 12:06:52 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:23:02.946 12:06:52 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78462 00:23:02.946 12:06:52 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78462 ']' 00:23:02.946 12:06:52 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.946 12:06:52 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.946 12:06:52 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.946 12:06:52 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.946 12:06:52 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:02.946 [2024-11-27 12:06:52.993798] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
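The trace above tears down the previous FTL instance, then has ftl/trim.sh start a fresh spdk_tgt with the ftl_init log flag and block in waitforlisten until the target answers on /var/tmp/spdk.sock. Below is a minimal bash sketch of that launch-and-wait pattern; the poll loop and the rpc_get_methods probe are illustrative stand-ins for the autotest_common.sh helpers, with the binary and socket paths taken from the trace:

    #!/usr/bin/env bash
    # Illustrative sketch of the launch-and-wait pattern seen in the trace;
    # not the actual autotest_common.sh implementation.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    RPC_SOCK=/var/tmp/spdk.sock

    "$SPDK_BIN" -L ftl_init &
    svcpid=$!

    # Poll until the target serves RPCs on the UNIX socket (what waitforlisten does).
    until "$RPC" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$svcpid" 2>/dev/null || exit 1   # give up if the target died
        sleep 0.1
    done
    echo "spdk_tgt (pid $svcpid) is listening on $RPC_SOCK"

Once the socket answers, the script proceeds to load the bdev configuration over RPC, which is what produces the FTL startup sequence logged next.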
00:23:02.946 [2024-11-27 12:06:52.993922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78462 ] 00:23:03.205 [2024-11-27 12:06:53.174558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.464 [2024-11-27 12:06:53.280126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.400 12:06:54 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:04.400 12:06:54 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:23:04.400 12:06:54 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:23:04.400 [2024-11-27 12:06:54.339593] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:04.400 [2024-11-27 12:06:54.339652] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:04.661 [2024-11-27 12:06:54.520586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.661 [2024-11-27 12:06:54.520632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:04.662 [2024-11-27 12:06:54.520652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:04.662 [2024-11-27 12:06:54.520663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.662 [2024-11-27 12:06:54.524345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.662 [2024-11-27 12:06:54.524390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:04.662 [2024-11-27 12:06:54.524405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.666 ms 00:23:04.662 [2024-11-27 12:06:54.524415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.662 [2024-11-27 12:06:54.524525] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:04.662 [2024-11-27 12:06:54.525511] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:04.662 [2024-11-27 12:06:54.525549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.662 [2024-11-27 12:06:54.525560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:04.662 [2024-11-27 12:06:54.525574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.038 ms 00:23:04.662 [2024-11-27 12:06:54.525586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.662 [2024-11-27 12:06:54.527073] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:04.662 [2024-11-27 12:06:54.546084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.662 [2024-11-27 12:06:54.546222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:04.662 [2024-11-27 12:06:54.546314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.045 ms 00:23:04.662 [2024-11-27 12:06:54.546370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.662 [2024-11-27 12:06:54.546536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.662 [2024-11-27 12:06:54.546598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:04.662 [2024-11-27 12:06:54.546699] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:23:04.662 [2024-11-27 12:06:54.546742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.662 [2024-11-27 12:06:54.553641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.662 [2024-11-27 12:06:54.553779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:04.662 [2024-11-27 12:06:54.553908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.829 ms 00:23:04.662 [2024-11-27 12:06:54.553952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.662 [2024-11-27 12:06:54.554146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.662 [2024-11-27 12:06:54.554199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:04.662 [2024-11-27 12:06:54.554295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:23:04.662 [2024-11-27 12:06:54.554344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.662 [2024-11-27 12:06:54.554420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.662 [2024-11-27 12:06:54.554524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:04.662 [2024-11-27 12:06:54.554567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:04.662 [2024-11-27 12:06:54.554605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.662 [2024-11-27 12:06:54.554706] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:04.662 [2024-11-27 12:06:54.559727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.662 [2024-11-27 12:06:54.559853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:04.662 [2024-11-27 12:06:54.559939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.030 ms 00:23:04.662 [2024-11-27 12:06:54.559979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.662 [2024-11-27 12:06:54.560060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.662 [2024-11-27 12:06:54.560076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:04.662 [2024-11-27 12:06:54.560095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:04.662 [2024-11-27 12:06:54.560105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.662 [2024-11-27 12:06:54.560130] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:04.662 [2024-11-27 12:06:54.560151] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:04.662 [2024-11-27 12:06:54.560197] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:04.662 [2024-11-27 12:06:54.560217] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:04.662 [2024-11-27 12:06:54.560309] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:04.662 [2024-11-27 12:06:54.560326] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:04.662 [2024-11-27 12:06:54.560347] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:04.662 [2024-11-27 12:06:54.560371] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:04.662 [2024-11-27 12:06:54.560386] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:04.662 [2024-11-27 12:06:54.560398] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:04.662 [2024-11-27 12:06:54.560410] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:04.662 [2024-11-27 12:06:54.560420] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:04.662 [2024-11-27 12:06:54.560434] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:04.662 [2024-11-27 12:06:54.560445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.662 [2024-11-27 12:06:54.560458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:04.662 [2024-11-27 12:06:54.560469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:23:04.662 [2024-11-27 12:06:54.560484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.662 [2024-11-27 12:06:54.560560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.662 [2024-11-27 12:06:54.560573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:04.662 [2024-11-27 12:06:54.560584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:04.662 [2024-11-27 12:06:54.560597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.662 [2024-11-27 12:06:54.560686] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:04.662 [2024-11-27 12:06:54.560700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:04.662 [2024-11-27 12:06:54.560711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:04.662 [2024-11-27 12:06:54.560724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.662 [2024-11-27 12:06:54.560735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:04.662 [2024-11-27 12:06:54.560747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:04.662 [2024-11-27 12:06:54.560756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:04.662 [2024-11-27 12:06:54.560772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:04.662 [2024-11-27 12:06:54.560782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:04.662 [2024-11-27 12:06:54.560794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:04.662 [2024-11-27 12:06:54.560803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:04.662 [2024-11-27 12:06:54.560815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:04.662 [2024-11-27 12:06:54.560824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:04.662 [2024-11-27 12:06:54.560836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:04.662 [2024-11-27 12:06:54.560845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:04.662 [2024-11-27 12:06:54.560857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.662 
[2024-11-27 12:06:54.560867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:04.662 [2024-11-27 12:06:54.560879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:04.662 [2024-11-27 12:06:54.560898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.662 [2024-11-27 12:06:54.560910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:04.662 [2024-11-27 12:06:54.560919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:04.662 [2024-11-27 12:06:54.560931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.662 [2024-11-27 12:06:54.560940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:04.662 [2024-11-27 12:06:54.560955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:04.662 [2024-11-27 12:06:54.560964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.662 [2024-11-27 12:06:54.560976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:04.662 [2024-11-27 12:06:54.560985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:04.662 [2024-11-27 12:06:54.560997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.662 [2024-11-27 12:06:54.561006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:04.662 [2024-11-27 12:06:54.561018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:04.662 [2024-11-27 12:06:54.561027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.662 [2024-11-27 12:06:54.561039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:04.662 [2024-11-27 12:06:54.561048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:04.662 [2024-11-27 12:06:54.561061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:04.662 [2024-11-27 12:06:54.561070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:04.662 [2024-11-27 12:06:54.561082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:04.662 [2024-11-27 12:06:54.561091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:04.662 [2024-11-27 12:06:54.561103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:04.662 [2024-11-27 12:06:54.561112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:04.662 [2024-11-27 12:06:54.561126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.662 [2024-11-27 12:06:54.561136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:04.663 [2024-11-27 12:06:54.561147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:04.663 [2024-11-27 12:06:54.561157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.663 [2024-11-27 12:06:54.561168] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:04.663 [2024-11-27 12:06:54.561181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:04.663 [2024-11-27 12:06:54.561195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:04.663 [2024-11-27 12:06:54.561205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.663 [2024-11-27 12:06:54.561219] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:23:04.663 [2024-11-27 12:06:54.561229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:04.663 [2024-11-27 12:06:54.561241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:04.663 [2024-11-27 12:06:54.561251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:04.663 [2024-11-27 12:06:54.561263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:04.663 [2024-11-27 12:06:54.561273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:04.663 [2024-11-27 12:06:54.561286] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:04.663 [2024-11-27 12:06:54.561299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:04.663 [2024-11-27 12:06:54.561315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:04.663 [2024-11-27 12:06:54.561326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:04.663 [2024-11-27 12:06:54.561341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:04.663 [2024-11-27 12:06:54.561352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:04.663 [2024-11-27 12:06:54.561384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:04.663 [2024-11-27 12:06:54.561395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:04.663 [2024-11-27 12:06:54.561407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:04.663 [2024-11-27 12:06:54.561418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:04.663 [2024-11-27 12:06:54.561431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:04.663 [2024-11-27 12:06:54.561442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:04.663 [2024-11-27 12:06:54.561455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:04.663 [2024-11-27 12:06:54.561466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:04.663 [2024-11-27 12:06:54.561478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:04.663 [2024-11-27 12:06:54.561488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:04.663 [2024-11-27 12:06:54.561501] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:04.663 [2024-11-27 
12:06:54.561512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:04.663 [2024-11-27 12:06:54.561540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:04.663 [2024-11-27 12:06:54.561551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:04.663 [2024-11-27 12:06:54.561564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:04.663 [2024-11-27 12:06:54.561574] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:04.663 [2024-11-27 12:06:54.561588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.663 [2024-11-27 12:06:54.561598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:04.663 [2024-11-27 12:06:54.561611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:23:04.663 [2024-11-27 12:06:54.561624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.663 [2024-11-27 12:06:54.600305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.663 [2024-11-27 12:06:54.600339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:04.663 [2024-11-27 12:06:54.600369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.681 ms 00:23:04.663 [2024-11-27 12:06:54.600386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.663 [2024-11-27 12:06:54.600559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.663 [2024-11-27 12:06:54.600572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:04.663 [2024-11-27 12:06:54.600589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:23:04.663 [2024-11-27 12:06:54.600599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.663 [2024-11-27 12:06:54.648057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.663 [2024-11-27 12:06:54.648094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:04.663 [2024-11-27 12:06:54.648109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.507 ms 00:23:04.663 [2024-11-27 12:06:54.648135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.663 [2024-11-27 12:06:54.648227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.663 [2024-11-27 12:06:54.648240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:04.663 [2024-11-27 12:06:54.648253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:04.663 [2024-11-27 12:06:54.648263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.663 [2024-11-27 12:06:54.648868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.663 [2024-11-27 12:06:54.648975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:04.663 [2024-11-27 12:06:54.649062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:23:04.663 [2024-11-27 12:06:54.649099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:04.663 [2024-11-27 12:06:54.649247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.663 [2024-11-27 12:06:54.649296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:04.663 [2024-11-27 12:06:54.649312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:04.663 [2024-11-27 12:06:54.649323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.663 [2024-11-27 12:06:54.671344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.663 [2024-11-27 12:06:54.671383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:04.663 [2024-11-27 12:06:54.671402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.024 ms 00:23:04.663 [2024-11-27 12:06:54.671429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.923 [2024-11-27 12:06:54.723222] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:04.923 [2024-11-27 12:06:54.723262] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:04.923 [2024-11-27 12:06:54.723290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.923 [2024-11-27 12:06:54.723302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:04.923 [2024-11-27 12:06:54.723319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.814 ms 00:23:04.923 [2024-11-27 12:06:54.723342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.923 [2024-11-27 12:06:54.751690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.923 [2024-11-27 12:06:54.751729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:04.923 [2024-11-27 12:06:54.751749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.284 ms 00:23:04.923 [2024-11-27 12:06:54.751759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.923 [2024-11-27 12:06:54.769165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.923 [2024-11-27 12:06:54.769292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:04.923 [2024-11-27 12:06:54.769340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.326 ms 00:23:04.923 [2024-11-27 12:06:54.769351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.923 [2024-11-27 12:06:54.787007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.923 [2024-11-27 12:06:54.787128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:04.923 [2024-11-27 12:06:54.787155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.527 ms 00:23:04.923 [2024-11-27 12:06:54.787182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.923 [2024-11-27 12:06:54.787954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.923 [2024-11-27 12:06:54.787979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:04.923 [2024-11-27 12:06:54.787996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms 00:23:04.923 [2024-11-27 12:06:54.788007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.923 [2024-11-27 
12:06:54.870671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.923 [2024-11-27 12:06:54.870733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:04.923 [2024-11-27 12:06:54.870754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.761 ms 00:23:04.923 [2024-11-27 12:06:54.870765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.923 [2024-11-27 12:06:54.881280] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:04.923 [2024-11-27 12:06:54.896915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.923 [2024-11-27 12:06:54.896969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:04.923 [2024-11-27 12:06:54.896984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.075 ms 00:23:04.923 [2024-11-27 12:06:54.896996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.923 [2024-11-27 12:06:54.897080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.923 [2024-11-27 12:06:54.897095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:04.923 [2024-11-27 12:06:54.897106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:04.923 [2024-11-27 12:06:54.897117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.923 [2024-11-27 12:06:54.897169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.923 [2024-11-27 12:06:54.897183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:04.923 [2024-11-27 12:06:54.897193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:04.923 [2024-11-27 12:06:54.897208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.923 [2024-11-27 12:06:54.897231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.923 [2024-11-27 12:06:54.897244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:04.923 [2024-11-27 12:06:54.897254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:04.923 [2024-11-27 12:06:54.897265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.923 [2024-11-27 12:06:54.897302] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:04.923 [2024-11-27 12:06:54.897318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.923 [2024-11-27 12:06:54.897330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:04.923 [2024-11-27 12:06:54.897342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:04.923 [2024-11-27 12:06:54.897354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.923 [2024-11-27 12:06:54.932063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.923 [2024-11-27 12:06:54.932101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:04.923 [2024-11-27 12:06:54.932118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.700 ms 00:23:04.923 [2024-11-27 12:06:54.932128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.923 [2024-11-27 12:06:54.932240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.923 [2024-11-27 12:06:54.932252] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:04.923 [2024-11-27 12:06:54.932269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:04.923 [2024-11-27 12:06:54.932279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.923 [2024-11-27 12:06:54.933344] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:04.923 [2024-11-27 12:06:54.937446] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 413.072 ms, result 0 00:23:04.923 [2024-11-27 12:06:54.938750] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:04.923 Some configs were skipped because the RPC state that can call them passed over. 00:23:05.183 12:06:54 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:23:05.183 [2024-11-27 12:06:55.177803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.183 [2024-11-27 12:06:55.177999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:05.183 [2024-11-27 12:06:55.178112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.709 ms 00:23:05.183 [2024-11-27 12:06:55.178163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.183 [2024-11-27 12:06:55.178243] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.142 ms, result 0 00:23:05.183 true 00:23:05.183 12:06:55 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:23:05.442 [2024-11-27 12:06:55.401122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.442 [2024-11-27 12:06:55.401166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:05.442 [2024-11-27 12:06:55.401185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.175 ms 00:23:05.442 [2024-11-27 12:06:55.401196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.442 [2024-11-27 12:06:55.401243] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.299 ms, result 0 00:23:05.442 true 00:23:05.442 12:06:55 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78462 00:23:05.442 12:06:55 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78462 ']' 00:23:05.442 12:06:55 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78462 00:23:05.442 12:06:55 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:23:05.442 12:06:55 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.442 12:06:55 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78462 00:23:05.443 killing process with pid 78462 00:23:05.443 12:06:55 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:05.443 12:06:55 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:05.443 12:06:55 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78462' 00:23:05.443 12:06:55 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78462 00:23:05.443 12:06:55 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78462 00:23:06.824 [2024-11-27 12:06:56.527911] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.824 [2024-11-27 12:06:56.527974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:06.824 [2024-11-27 12:06:56.527991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:06.824 [2024-11-27 12:06:56.528004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.824 [2024-11-27 12:06:56.528031] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:06.824 [2024-11-27 12:06:56.532273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.824 [2024-11-27 12:06:56.532308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:06.824 [2024-11-27 12:06:56.532326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.226 ms 00:23:06.824 [2024-11-27 12:06:56.532337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.824 [2024-11-27 12:06:56.532604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.824 [2024-11-27 12:06:56.532622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:06.824 [2024-11-27 12:06:56.532635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:23:06.824 [2024-11-27 12:06:56.532645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.824 [2024-11-27 12:06:56.535973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.824 [2024-11-27 12:06:56.536012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:06.824 [2024-11-27 12:06:56.536027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.310 ms 00:23:06.824 [2024-11-27 12:06:56.536037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.824 [2024-11-27 12:06:56.541680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.824 [2024-11-27 12:06:56.541715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:06.824 [2024-11-27 12:06:56.541736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.611 ms 00:23:06.824 [2024-11-27 12:06:56.541747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.824 [2024-11-27 12:06:56.556272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.824 [2024-11-27 12:06:56.556319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:06.824 [2024-11-27 12:06:56.556338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.485 ms 00:23:06.824 [2024-11-27 12:06:56.556348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.824 [2024-11-27 12:06:56.566490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.824 [2024-11-27 12:06:56.566657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:06.824 [2024-11-27 12:06:56.566685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.088 ms 00:23:06.824 [2024-11-27 12:06:56.566696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.825 [2024-11-27 12:06:56.566835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.825 [2024-11-27 12:06:56.566849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:06.825 [2024-11-27 12:06:56.566863] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:23:06.825 [2024-11-27 12:06:56.566873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.825 [2024-11-27 12:06:56.582367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.825 [2024-11-27 12:06:56.582401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:06.825 [2024-11-27 12:06:56.582424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.488 ms 00:23:06.825 [2024-11-27 12:06:56.582435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.825 [2024-11-27 12:06:56.596581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.825 [2024-11-27 12:06:56.596733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:06.825 [2024-11-27 12:06:56.596766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.109 ms 00:23:06.825 [2024-11-27 12:06:56.596793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.825 [2024-11-27 12:06:56.610679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.825 [2024-11-27 12:06:56.610835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:06.825 [2024-11-27 12:06:56.610864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.846 ms 00:23:06.825 [2024-11-27 12:06:56.610874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.825 [2024-11-27 12:06:56.624697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.825 [2024-11-27 12:06:56.624839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:06.825 [2024-11-27 12:06:56.624862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.764 ms 00:23:06.825 [2024-11-27 12:06:56.624888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.825 [2024-11-27 12:06:56.624989] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:06.825 [2024-11-27 12:06:56.625008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 
12:06:56.625135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:23:06.825 [2024-11-27 12:06:56.625451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:06.825 [2024-11-27 12:06:56.625934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.625950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.625961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.625976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.625987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:06.826 [2024-11-27 12:06:56.626375] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:06.826 [2024-11-27 12:06:56.626397] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ac0f0c90-8a1c-488f-b0e9-47cb15d830e6 00:23:06.826 [2024-11-27 12:06:56.626416] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:06.826 [2024-11-27 12:06:56.626431] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:06.826 [2024-11-27 12:06:56.626441] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:06.826 [2024-11-27 12:06:56.626457] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:06.826 [2024-11-27 12:06:56.626467] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:06.826 [2024-11-27 12:06:56.626482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:06.826 [2024-11-27 12:06:56.626492] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:06.826 [2024-11-27 12:06:56.626507] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:06.826 [2024-11-27 12:06:56.626516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:06.826 [2024-11-27 12:06:56.626531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:06.826 [2024-11-27 12:06:56.626541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:06.826 [2024-11-27 12:06:56.626557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.545 ms 00:23:06.826 [2024-11-27 12:06:56.626574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.826 [2024-11-27 12:06:56.645866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.826 [2024-11-27 12:06:56.645897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:06.826 [2024-11-27 12:06:56.645919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.290 ms 00:23:06.826 [2024-11-27 12:06:56.645929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.826 [2024-11-27 12:06:56.646475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.826 [2024-11-27 12:06:56.646491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:06.826 [2024-11-27 12:06:56.646513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:23:06.826 [2024-11-27 12:06:56.646523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.826 [2024-11-27 12:06:56.713306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.826 [2024-11-27 12:06:56.713339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:06.826 [2024-11-27 12:06:56.713366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.826 [2024-11-27 12:06:56.713377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.826 [2024-11-27 12:06:56.713479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.826 [2024-11-27 12:06:56.713491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:06.826 [2024-11-27 12:06:56.713513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.826 [2024-11-27 12:06:56.713523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.826 [2024-11-27 12:06:56.713578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.826 [2024-11-27 12:06:56.713592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:06.826 [2024-11-27 12:06:56.713612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.826 [2024-11-27 12:06:56.713623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.826 [2024-11-27 12:06:56.713646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.826 [2024-11-27 12:06:56.713658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:06.826 [2024-11-27 12:06:56.713673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.826 [2024-11-27 12:06:56.713687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.826 [2024-11-27 12:06:56.831652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.826 [2024-11-27 12:06:56.831697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:06.826 [2024-11-27 12:06:56.831719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.826 [2024-11-27 12:06:56.831730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.086 [2024-11-27 
12:06:56.931931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.086 [2024-11-27 12:06:56.931976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:07.086 [2024-11-27 12:06:56.931995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.086 [2024-11-27 12:06:56.932005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.086 [2024-11-27 12:06:56.932114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.086 [2024-11-27 12:06:56.932125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:07.086 [2024-11-27 12:06:56.932142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.086 [2024-11-27 12:06:56.932152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.086 [2024-11-27 12:06:56.932182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.086 [2024-11-27 12:06:56.932192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:07.086 [2024-11-27 12:06:56.932204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.086 [2024-11-27 12:06:56.932214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.086 [2024-11-27 12:06:56.932326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.086 [2024-11-27 12:06:56.932338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:07.086 [2024-11-27 12:06:56.932351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.086 [2024-11-27 12:06:56.932381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.086 [2024-11-27 12:06:56.932440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.086 [2024-11-27 12:06:56.932452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:07.086 [2024-11-27 12:06:56.932465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.086 [2024-11-27 12:06:56.932475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.086 [2024-11-27 12:06:56.932537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.086 [2024-11-27 12:06:56.932547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:07.086 [2024-11-27 12:06:56.932564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.086 [2024-11-27 12:06:56.932574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.086 [2024-11-27 12:06:56.932621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.086 [2024-11-27 12:06:56.932633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:07.086 [2024-11-27 12:06:56.932645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.086 [2024-11-27 12:06:56.932656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.086 [2024-11-27 12:06:56.932797] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 405.522 ms, result 0 00:23:08.027 12:06:57 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:08.027 12:06:57 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:08.027 [2024-11-27 12:06:58.026532] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:23:08.027 [2024-11-27 12:06:58.026685] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78526 ] 00:23:08.289 [2024-11-27 12:06:58.209294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.289 [2024-11-27 12:06:58.314437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.860 [2024-11-27 12:06:58.675874] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:08.860 [2024-11-27 12:06:58.675931] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:08.860 [2024-11-27 12:06:58.837114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.860 [2024-11-27 12:06:58.837160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:08.860 [2024-11-27 12:06:58.837174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:08.860 [2024-11-27 12:06:58.837184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.860 [2024-11-27 12:06:58.840388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.860 [2024-11-27 12:06:58.840423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:08.860 [2024-11-27 12:06:58.840435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.189 ms 00:23:08.860 [2024-11-27 12:06:58.840445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.860 [2024-11-27 12:06:58.840540] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:08.860 [2024-11-27 12:06:58.841511] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:08.860 [2024-11-27 12:06:58.841537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.860 [2024-11-27 12:06:58.841548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:08.860 [2024-11-27 12:06:58.841558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:23:08.860 [2024-11-27 12:06:58.841568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.860 [2024-11-27 12:06:58.843086] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:08.860 [2024-11-27 12:06:58.861860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.860 [2024-11-27 12:06:58.861896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:08.860 [2024-11-27 12:06:58.861909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.804 ms 00:23:08.860 [2024-11-27 12:06:58.861920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.860 [2024-11-27 12:06:58.862013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.860 [2024-11-27 12:06:58.862027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:08.860 [2024-11-27 12:06:58.862038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.022 ms 00:23:08.860 [2024-11-27 12:06:58.862047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.860 [2024-11-27 12:06:58.868893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.860 [2024-11-27 12:06:58.868919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:08.860 [2024-11-27 12:06:58.868930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.819 ms 00:23:08.860 [2024-11-27 12:06:58.868940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.860 [2024-11-27 12:06:58.869032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.860 [2024-11-27 12:06:58.869046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:08.860 [2024-11-27 12:06:58.869056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:08.860 [2024-11-27 12:06:58.869066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.860 [2024-11-27 12:06:58.869095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.860 [2024-11-27 12:06:58.869106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:08.860 [2024-11-27 12:06:58.869116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:08.860 [2024-11-27 12:06:58.869125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.860 [2024-11-27 12:06:58.869146] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:08.860 [2024-11-27 12:06:58.873831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.860 [2024-11-27 12:06:58.873862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:08.860 [2024-11-27 12:06:58.873875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.697 ms 00:23:08.860 [2024-11-27 12:06:58.873885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.860 [2024-11-27 12:06:58.873949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.860 [2024-11-27 12:06:58.873962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:08.860 [2024-11-27 12:06:58.873973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:08.860 [2024-11-27 12:06:58.873982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.860 [2024-11-27 12:06:58.874006] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:08.860 [2024-11-27 12:06:58.874026] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:08.860 [2024-11-27 12:06:58.874061] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:08.860 [2024-11-27 12:06:58.874078] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:08.860 [2024-11-27 12:06:58.874166] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:08.860 [2024-11-27 12:06:58.874179] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:08.860 [2024-11-27 12:06:58.874192] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:08.860 [2024-11-27 12:06:58.874207] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:08.860 [2024-11-27 12:06:58.874219] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:08.860 [2024-11-27 12:06:58.874230] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:08.860 [2024-11-27 12:06:58.874239] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:08.860 [2024-11-27 12:06:58.874249] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:08.860 [2024-11-27 12:06:58.874258] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:08.860 [2024-11-27 12:06:58.874268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.860 [2024-11-27 12:06:58.874278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:08.860 [2024-11-27 12:06:58.874288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:23:08.860 [2024-11-27 12:06:58.874297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.860 [2024-11-27 12:06:58.874406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.860 [2024-11-27 12:06:58.874423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:08.860 [2024-11-27 12:06:58.874433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:08.860 [2024-11-27 12:06:58.874443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.860 [2024-11-27 12:06:58.874534] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:08.860 [2024-11-27 12:06:58.874547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:08.860 [2024-11-27 12:06:58.874557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:08.860 [2024-11-27 12:06:58.874568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:08.860 [2024-11-27 12:06:58.874578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:08.860 [2024-11-27 12:06:58.874587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:08.860 [2024-11-27 12:06:58.874597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:08.860 [2024-11-27 12:06:58.874608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:08.860 [2024-11-27 12:06:58.874617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:08.860 [2024-11-27 12:06:58.874626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:08.860 [2024-11-27 12:06:58.874636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:08.860 [2024-11-27 12:06:58.874656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:08.860 [2024-11-27 12:06:58.874665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:08.860 [2024-11-27 12:06:58.874674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:08.860 [2024-11-27 12:06:58.874684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:08.860 [2024-11-27 12:06:58.874693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:08.860 [2024-11-27 12:06:58.874702] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:08.860 [2024-11-27 12:06:58.874711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:08.860 [2024-11-27 12:06:58.874720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:08.860 [2024-11-27 12:06:58.874729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:08.860 [2024-11-27 12:06:58.874739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:08.860 [2024-11-27 12:06:58.874748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:08.860 [2024-11-27 12:06:58.874757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:08.860 [2024-11-27 12:06:58.874766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:08.860 [2024-11-27 12:06:58.874775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:08.860 [2024-11-27 12:06:58.874784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:08.860 [2024-11-27 12:06:58.874792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:08.860 [2024-11-27 12:06:58.874801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:08.860 [2024-11-27 12:06:58.874810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:08.860 [2024-11-27 12:06:58.874819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:08.860 [2024-11-27 12:06:58.874828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:08.860 [2024-11-27 12:06:58.874837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:08.860 [2024-11-27 12:06:58.874846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:08.860 [2024-11-27 12:06:58.874854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:08.860 [2024-11-27 12:06:58.874863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:08.860 [2024-11-27 12:06:58.874871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:08.861 [2024-11-27 12:06:58.874880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:08.861 [2024-11-27 12:06:58.874889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:08.861 [2024-11-27 12:06:58.874898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:08.861 [2024-11-27 12:06:58.874907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:08.861 [2024-11-27 12:06:58.874915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:08.861 [2024-11-27 12:06:58.874924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:08.861 [2024-11-27 12:06:58.874934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:08.861 [2024-11-27 12:06:58.874943] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:08.861 [2024-11-27 12:06:58.874953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:08.861 [2024-11-27 12:06:58.874967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:08.861 [2024-11-27 12:06:58.874977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:08.861 [2024-11-27 12:06:58.874987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:08.861 
[2024-11-27 12:06:58.874996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:08.861 [2024-11-27 12:06:58.875005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:08.861 [2024-11-27 12:06:58.875015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:08.861 [2024-11-27 12:06:58.875023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:08.861 [2024-11-27 12:06:58.875032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:08.861 [2024-11-27 12:06:58.875042] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:08.861 [2024-11-27 12:06:58.875055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:08.861 [2024-11-27 12:06:58.875067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:08.861 [2024-11-27 12:06:58.875077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:08.861 [2024-11-27 12:06:58.875087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:08.861 [2024-11-27 12:06:58.875097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:08.861 [2024-11-27 12:06:58.875107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:08.861 [2024-11-27 12:06:58.875117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:08.861 [2024-11-27 12:06:58.875127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:08.861 [2024-11-27 12:06:58.875137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:08.861 [2024-11-27 12:06:58.875148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:08.861 [2024-11-27 12:06:58.875158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:08.861 [2024-11-27 12:06:58.875168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:08.861 [2024-11-27 12:06:58.875178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:08.861 [2024-11-27 12:06:58.875188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:08.861 [2024-11-27 12:06:58.875198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:08.861 [2024-11-27 12:06:58.875208] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:08.861 [2024-11-27 12:06:58.875219] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:08.861 [2024-11-27 12:06:58.875230] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:08.861 [2024-11-27 12:06:58.875240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:08.861 [2024-11-27 12:06:58.875250] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:08.861 [2024-11-27 12:06:58.875261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:08.861 [2024-11-27 12:06:58.875271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.861 [2024-11-27 12:06:58.875286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:08.861 [2024-11-27 12:06:58.875296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:23:08.861 [2024-11-27 12:06:58.875306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.121 [2024-11-27 12:06:58.912567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.121 [2024-11-27 12:06:58.912603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:09.121 [2024-11-27 12:06:58.912617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.266 ms 00:23:09.121 [2024-11-27 12:06:58.912627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.121 [2024-11-27 12:06:58.912745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.121 [2024-11-27 12:06:58.912758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:09.121 [2024-11-27 12:06:58.912769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:09.121 [2024-11-27 12:06:58.912778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.121 [2024-11-27 12:06:58.985564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.121 [2024-11-27 12:06:58.985747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:09.121 [2024-11-27 12:06:58.985792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.881 ms 00:23:09.121 [2024-11-27 12:06:58.985803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.121 [2024-11-27 12:06:58.985914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.121 [2024-11-27 12:06:58.985927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:09.121 [2024-11-27 12:06:58.985939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:09.121 [2024-11-27 12:06:58.985949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.121 [2024-11-27 12:06:58.986461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.121 [2024-11-27 12:06:58.986477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:09.121 [2024-11-27 12:06:58.986495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:23:09.121 [2024-11-27 12:06:58.986505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.121 [2024-11-27 
12:06:58.986627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.121 [2024-11-27 12:06:58.986641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:09.121 [2024-11-27 12:06:58.986652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:09.121 [2024-11-27 12:06:58.986662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.121 [2024-11-27 12:06:59.006059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.121 [2024-11-27 12:06:59.006094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:09.121 [2024-11-27 12:06:59.006107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.406 ms 00:23:09.121 [2024-11-27 12:06:59.006118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.121 [2024-11-27 12:06:59.024456] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:09.121 [2024-11-27 12:06:59.024633] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:09.121 [2024-11-27 12:06:59.024653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.121 [2024-11-27 12:06:59.024665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:09.121 [2024-11-27 12:06:59.024676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.460 ms 00:23:09.121 [2024-11-27 12:06:59.024685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.121 [2024-11-27 12:06:59.051901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.121 [2024-11-27 12:06:59.052037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:09.121 [2024-11-27 12:06:59.052074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.179 ms 00:23:09.121 [2024-11-27 12:06:59.052086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.121 [2024-11-27 12:06:59.069806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.121 [2024-11-27 12:06:59.069839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:09.121 [2024-11-27 12:06:59.069851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.644 ms 00:23:09.121 [2024-11-27 12:06:59.069860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.121 [2024-11-27 12:06:59.087162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.121 [2024-11-27 12:06:59.087194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:09.121 [2024-11-27 12:06:59.087206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.257 ms 00:23:09.121 [2024-11-27 12:06:59.087215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.121 [2024-11-27 12:06:59.087946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.121 [2024-11-27 12:06:59.087971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:09.121 [2024-11-27 12:06:59.087982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:23:09.121 [2024-11-27 12:06:59.087992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.121 [2024-11-27 12:06:59.168925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:09.121 [2024-11-27 12:06:59.168976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:09.121 [2024-11-27 12:06:59.168993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.035 ms 00:23:09.121 [2024-11-27 12:06:59.169005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.380 [2024-11-27 12:06:59.180161] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:09.380 [2024-11-27 12:06:59.196686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.380 [2024-11-27 12:06:59.196733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:09.380 [2024-11-27 12:06:59.196747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.625 ms 00:23:09.380 [2024-11-27 12:06:59.196780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.380 [2024-11-27 12:06:59.196909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.380 [2024-11-27 12:06:59.196923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:09.380 [2024-11-27 12:06:59.196934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:09.380 [2024-11-27 12:06:59.196944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.380 [2024-11-27 12:06:59.197000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.380 [2024-11-27 12:06:59.197012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:09.380 [2024-11-27 12:06:59.197022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:09.381 [2024-11-27 12:06:59.197037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.381 [2024-11-27 12:06:59.197071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.381 [2024-11-27 12:06:59.197084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:09.381 [2024-11-27 12:06:59.197094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:09.381 [2024-11-27 12:06:59.197104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.381 [2024-11-27 12:06:59.197141] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:09.381 [2024-11-27 12:06:59.197153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.381 [2024-11-27 12:06:59.197163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:09.381 [2024-11-27 12:06:59.197173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:09.381 [2024-11-27 12:06:59.197182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.381 [2024-11-27 12:06:59.233551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.381 [2024-11-27 12:06:59.233588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:09.381 [2024-11-27 12:06:59.233602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.406 ms 00:23:09.381 [2024-11-27 12:06:59.233629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.381 [2024-11-27 12:06:59.233749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.381 [2024-11-27 12:06:59.233764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:23:09.381 [2024-11-27 12:06:59.233775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:09.381 [2024-11-27 12:06:59.233785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.381 [2024-11-27 12:06:59.234682] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:09.381 [2024-11-27 12:06:59.238921] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 397.905 ms, result 0 00:23:09.381 [2024-11-27 12:06:59.239711] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:09.381 [2024-11-27 12:06:59.257963] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:10.319  [2024-11-27T12:07:01.310Z] Copying: 27/256 [MB] (27 MBps) [2024-11-27T12:07:02.689Z] Copying: 53/256 [MB] (25 MBps) [2024-11-27T12:07:03.258Z] Copying: 78/256 [MB] (25 MBps) [2024-11-27T12:07:04.635Z] Copying: 103/256 [MB] (25 MBps) [2024-11-27T12:07:05.571Z] Copying: 128/256 [MB] (25 MBps) [2024-11-27T12:07:06.507Z] Copying: 153/256 [MB] (24 MBps) [2024-11-27T12:07:07.443Z] Copying: 177/256 [MB] (24 MBps) [2024-11-27T12:07:08.376Z] Copying: 202/256 [MB] (24 MBps) [2024-11-27T12:07:09.329Z] Copying: 227/256 [MB] (24 MBps) [2024-11-27T12:07:09.593Z] Copying: 251/256 [MB] (24 MBps) [2024-11-27T12:07:09.593Z] Copying: 256/256 [MB] (average 25 MBps)[2024-11-27 12:07:09.432609] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:19.540 [2024-11-27 12:07:09.446922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.540 [2024-11-27 12:07:09.447055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:19.540 [2024-11-27 12:07:09.447202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:19.540 [2024-11-27 12:07:09.447238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.540 [2024-11-27 12:07:09.447288] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:19.540 [2024-11-27 12:07:09.451223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.540 [2024-11-27 12:07:09.451369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:19.540 [2024-11-27 12:07:09.451449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.887 ms 00:23:19.540 [2024-11-27 12:07:09.451482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.540 [2024-11-27 12:07:09.451740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.540 [2024-11-27 12:07:09.451859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:19.540 [2024-11-27 12:07:09.451897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:23:19.540 [2024-11-27 12:07:09.451925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.540 [2024-11-27 12:07:09.454801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.540 [2024-11-27 12:07:09.454896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:19.540 [2024-11-27 12:07:09.455046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.835 ms 00:23:19.540 [2024-11-27 12:07:09.455061] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.540 [2024-11-27 12:07:09.460554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.540 [2024-11-27 12:07:09.460687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:19.540 [2024-11-27 12:07:09.460803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.476 ms 00:23:19.540 [2024-11-27 12:07:09.460837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.540 [2024-11-27 12:07:09.493795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.540 [2024-11-27 12:07:09.493916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:19.540 [2024-11-27 12:07:09.493951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.926 ms 00:23:19.540 [2024-11-27 12:07:09.493961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.540 [2024-11-27 12:07:09.513923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.540 [2024-11-27 12:07:09.514078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:19.540 [2024-11-27 12:07:09.514105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.911 ms 00:23:19.540 [2024-11-27 12:07:09.514116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.540 [2024-11-27 12:07:09.514298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.540 [2024-11-27 12:07:09.514315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:19.540 [2024-11-27 12:07:09.514338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:23:19.540 [2024-11-27 12:07:09.514348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.540 [2024-11-27 12:07:09.548482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.540 [2024-11-27 12:07:09.548527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:19.540 [2024-11-27 12:07:09.548539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.158 ms 00:23:19.540 [2024-11-27 12:07:09.548548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.540 [2024-11-27 12:07:09.582417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.540 [2024-11-27 12:07:09.582570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:19.540 [2024-11-27 12:07:09.582589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.872 ms 00:23:19.540 [2024-11-27 12:07:09.582599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.801 [2024-11-27 12:07:09.617709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.801 [2024-11-27 12:07:09.617748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:19.801 [2024-11-27 12:07:09.617761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.080 ms 00:23:19.801 [2024-11-27 12:07:09.617769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.801 [2024-11-27 12:07:09.650562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.801 [2024-11-27 12:07:09.650691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:19.801 [2024-11-27 12:07:09.650725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 32.753 ms 00:23:19.801 [2024-11-27 12:07:09.650734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.801 [2024-11-27 12:07:09.650819] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:19.801 [2024-11-27 12:07:09.650835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:19.801 [2024-11-27 12:07:09.650846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.650856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.650866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.650877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.650887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.650897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.650907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.650917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.650927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.650937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.650946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.650956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.650966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.650975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.650985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.650995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 
[2024-11-27 12:07:09.651062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:23:19.802 [2024-11-27 12:07:09.651313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:19.802 [2024-11-27 12:07:09.651612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:19.803 [2024-11-27 12:07:09.651900] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:19.803 [2024-11-27 12:07:09.651910] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ac0f0c90-8a1c-488f-b0e9-47cb15d830e6 00:23:19.803 [2024-11-27 12:07:09.651921] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:19.803 [2024-11-27 12:07:09.651930] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:19.803 [2024-11-27 12:07:09.651939] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:19.803 [2024-11-27 12:07:09.651949] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:19.803 [2024-11-27 12:07:09.651958] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:19.803 [2024-11-27 12:07:09.651967] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:19.803 [2024-11-27 12:07:09.651981] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:19.803 [2024-11-27 12:07:09.651990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:19.803 [2024-11-27 12:07:09.651999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:19.803 [2024-11-27 12:07:09.652008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.803 [2024-11-27 12:07:09.652017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:19.803 [2024-11-27 12:07:09.652028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.192 ms 00:23:19.803 [2024-11-27 12:07:09.652037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.803 [2024-11-27 12:07:09.671406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.803 [2024-11-27 12:07:09.671437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:19.803 [2024-11-27 12:07:09.671448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.381 ms 00:23:19.803 [2024-11-27 12:07:09.671457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.803 [2024-11-27 12:07:09.672031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.803 [2024-11-27 12:07:09.672047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:19.803 [2024-11-27 12:07:09.672057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:23:19.803 [2024-11-27 12:07:09.672066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.803 [2024-11-27 12:07:09.725901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.803 [2024-11-27 12:07:09.726057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:19.803 [2024-11-27 12:07:09.726077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.803 [2024-11-27 12:07:09.726094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.803 [2024-11-27 12:07:09.726216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.803 [2024-11-27 
12:07:09.726231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:19.803 [2024-11-27 12:07:09.726241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.803 [2024-11-27 12:07:09.726251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.803 [2024-11-27 12:07:09.726303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.803 [2024-11-27 12:07:09.726317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:19.803 [2024-11-27 12:07:09.726327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.803 [2024-11-27 12:07:09.726337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.803 [2024-11-27 12:07:09.726376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.803 [2024-11-27 12:07:09.726388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:19.803 [2024-11-27 12:07:09.726398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.803 [2024-11-27 12:07:09.726408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.803 [2024-11-27 12:07:09.845761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.803 [2024-11-27 12:07:09.845805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:19.803 [2024-11-27 12:07:09.845821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.803 [2024-11-27 12:07:09.845832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.065 [2024-11-27 12:07:09.942955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.065 [2024-11-27 12:07:09.942999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:20.065 [2024-11-27 12:07:09.943013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.065 [2024-11-27 12:07:09.943023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.065 [2024-11-27 12:07:09.943088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.065 [2024-11-27 12:07:09.943098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:20.065 [2024-11-27 12:07:09.943108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.065 [2024-11-27 12:07:09.943118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.065 [2024-11-27 12:07:09.943145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.065 [2024-11-27 12:07:09.943161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:20.065 [2024-11-27 12:07:09.943171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.065 [2024-11-27 12:07:09.943180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.065 [2024-11-27 12:07:09.943286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.065 [2024-11-27 12:07:09.943298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:20.065 [2024-11-27 12:07:09.943308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.065 [2024-11-27 12:07:09.943318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.065 [2024-11-27 12:07:09.943389] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.065 [2024-11-27 12:07:09.943420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:20.065 [2024-11-27 12:07:09.943435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.065 [2024-11-27 12:07:09.943445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.065 [2024-11-27 12:07:09.943490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.065 [2024-11-27 12:07:09.943501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:20.065 [2024-11-27 12:07:09.943510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.065 [2024-11-27 12:07:09.943520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.065 [2024-11-27 12:07:09.943587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.065 [2024-11-27 12:07:09.943606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:20.065 [2024-11-27 12:07:09.943616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.065 [2024-11-27 12:07:09.943626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.065 [2024-11-27 12:07:09.943765] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 497.642 ms, result 0 00:23:21.004 00:23:21.004 00:23:21.004 12:07:10 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:23:21.004 12:07:10 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:21.572 12:07:11 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:21.572 [2024-11-27 12:07:11.514579] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
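The three trim.sh commands above are the substance of this test step: cmp checks that the first 4 MiB of the read-back data file compares equal to /dev/zero (i.e. the trimmed range really reads as zeroes), md5sum records a checksum of the data file, and spdk_dd then writes 1024 blocks of the random pattern back into the ftl0 bdev. A minimal stand-alone sketch of the same sequence, using the paths exactly as they appear in the log:

# Replay of the trim.sh verification step (paths copied from the xtrace above).
SPDK=/home/vagrant/spdk_repo/spdk
cmp --bytes=4194304 "$SPDK/test/ftl/data" /dev/zero   # trimmed range must be all zeroes
md5sum "$SPDK/test/ftl/data"                          # checksum of the data file
"$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/random_pattern" \
    --ob=ftl0 --count=1024 --json="$SPDK/test/ftl/config/ftl.json"

cmp exits non-zero at the first differing byte, so a failed trim would abort the test here; the spdk_dd invocation is what produces the SPDK/DPDK startup banner that continues below.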
00:23:21.572 [2024-11-27 12:07:11.515415] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78670 ] 00:23:21.832 [2024-11-27 12:07:11.715538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.832 [2024-11-27 12:07:11.822025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.402 [2024-11-27 12:07:12.177714] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:22.402 [2024-11-27 12:07:12.177781] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:22.402 [2024-11-27 12:07:12.339475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.402 [2024-11-27 12:07:12.339519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:22.402 [2024-11-27 12:07:12.339535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:22.402 [2024-11-27 12:07:12.339546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.402 [2024-11-27 12:07:12.342670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.402 [2024-11-27 12:07:12.342709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:22.403 [2024-11-27 12:07:12.342721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.110 ms 00:23:22.403 [2024-11-27 12:07:12.342731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.403 [2024-11-27 12:07:12.342827] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:22.403 [2024-11-27 12:07:12.343750] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:22.403 [2024-11-27 12:07:12.343777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.403 [2024-11-27 12:07:12.343787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:22.403 [2024-11-27 12:07:12.343797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:23:22.403 [2024-11-27 12:07:12.343806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.403 [2024-11-27 12:07:12.345271] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:22.403 [2024-11-27 12:07:12.363850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.403 [2024-11-27 12:07:12.363888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:22.403 [2024-11-27 12:07:12.363901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.610 ms 00:23:22.403 [2024-11-27 12:07:12.363912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.403 [2024-11-27 12:07:12.364008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.403 [2024-11-27 12:07:12.364021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:22.403 [2024-11-27 12:07:12.364033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:23:22.403 [2024-11-27 12:07:12.364043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.403 [2024-11-27 12:07:12.370921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:22.403 [2024-11-27 12:07:12.370948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:22.403 [2024-11-27 12:07:12.370960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.850 ms 00:23:22.403 [2024-11-27 12:07:12.370969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.403 [2024-11-27 12:07:12.371065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.403 [2024-11-27 12:07:12.371079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:22.403 [2024-11-27 12:07:12.371092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:22.403 [2024-11-27 12:07:12.371102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.403 [2024-11-27 12:07:12.371132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.403 [2024-11-27 12:07:12.371142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:22.403 [2024-11-27 12:07:12.371153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:22.403 [2024-11-27 12:07:12.371162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.403 [2024-11-27 12:07:12.371183] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:22.403 [2024-11-27 12:07:12.375940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.403 [2024-11-27 12:07:12.375972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:22.403 [2024-11-27 12:07:12.375984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.769 ms 00:23:22.403 [2024-11-27 12:07:12.376010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.403 [2024-11-27 12:07:12.376075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.403 [2024-11-27 12:07:12.376087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:22.403 [2024-11-27 12:07:12.376098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:22.403 [2024-11-27 12:07:12.376109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.403 [2024-11-27 12:07:12.376137] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:22.403 [2024-11-27 12:07:12.376159] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:22.403 [2024-11-27 12:07:12.376193] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:22.403 [2024-11-27 12:07:12.376211] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:22.403 [2024-11-27 12:07:12.376297] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:22.403 [2024-11-27 12:07:12.376311] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:22.403 [2024-11-27 12:07:12.376324] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:22.403 [2024-11-27 12:07:12.376340] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:22.403 [2024-11-27 12:07:12.376352] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:22.403 [2024-11-27 12:07:12.376363] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:22.403 [2024-11-27 12:07:12.376386] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:22.403 [2024-11-27 12:07:12.376396] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:22.403 [2024-11-27 12:07:12.376407] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:22.403 [2024-11-27 12:07:12.376421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.403 [2024-11-27 12:07:12.376430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:22.403 [2024-11-27 12:07:12.376441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:23:22.403 [2024-11-27 12:07:12.376450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.403 [2024-11-27 12:07:12.376524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.403 [2024-11-27 12:07:12.376538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:22.403 [2024-11-27 12:07:12.376548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:22.403 [2024-11-27 12:07:12.376558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.403 [2024-11-27 12:07:12.376648] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:22.403 [2024-11-27 12:07:12.376661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:22.403 [2024-11-27 12:07:12.376672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:22.403 [2024-11-27 12:07:12.376682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.403 [2024-11-27 12:07:12.376692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:22.403 [2024-11-27 12:07:12.376701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:22.403 [2024-11-27 12:07:12.376712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:22.403 [2024-11-27 12:07:12.376722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:22.403 [2024-11-27 12:07:12.376731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:22.403 [2024-11-27 12:07:12.376740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:22.403 [2024-11-27 12:07:12.376749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:22.403 [2024-11-27 12:07:12.376769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:22.403 [2024-11-27 12:07:12.376777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:22.403 [2024-11-27 12:07:12.376787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:22.403 [2024-11-27 12:07:12.376796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:22.403 [2024-11-27 12:07:12.376806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.403 [2024-11-27 12:07:12.376816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:22.403 [2024-11-27 12:07:12.376825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:22.403 [2024-11-27 12:07:12.376833] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.403 [2024-11-27 12:07:12.376842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:22.403 [2024-11-27 12:07:12.376851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:22.403 [2024-11-27 12:07:12.376860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.403 [2024-11-27 12:07:12.376869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:22.404 [2024-11-27 12:07:12.376878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:22.404 [2024-11-27 12:07:12.376886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.404 [2024-11-27 12:07:12.376895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:22.404 [2024-11-27 12:07:12.376905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:22.404 [2024-11-27 12:07:12.376914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.404 [2024-11-27 12:07:12.376923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:22.404 [2024-11-27 12:07:12.376932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:22.404 [2024-11-27 12:07:12.376940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.404 [2024-11-27 12:07:12.376949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:22.404 [2024-11-27 12:07:12.376958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:22.404 [2024-11-27 12:07:12.376967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:22.404 [2024-11-27 12:07:12.376975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:22.404 [2024-11-27 12:07:12.376985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:22.404 [2024-11-27 12:07:12.376994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:22.404 [2024-11-27 12:07:12.377003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:22.404 [2024-11-27 12:07:12.377012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:22.404 [2024-11-27 12:07:12.377020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.404 [2024-11-27 12:07:12.377029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:22.404 [2024-11-27 12:07:12.377037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:22.404 [2024-11-27 12:07:12.377046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.404 [2024-11-27 12:07:12.377055] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:22.404 [2024-11-27 12:07:12.377065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:22.404 [2024-11-27 12:07:12.377078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:22.404 [2024-11-27 12:07:12.377087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.404 [2024-11-27 12:07:12.377097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:22.404 [2024-11-27 12:07:12.377106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:22.404 [2024-11-27 12:07:12.377115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:22.404 
[2024-11-27 12:07:12.377124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:22.404 [2024-11-27 12:07:12.377132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:22.404 [2024-11-27 12:07:12.377141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:22.404 [2024-11-27 12:07:12.377151] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:22.404 [2024-11-27 12:07:12.377163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:22.404 [2024-11-27 12:07:12.377173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:22.404 [2024-11-27 12:07:12.377183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:22.404 [2024-11-27 12:07:12.377192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:22.404 [2024-11-27 12:07:12.377202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:22.404 [2024-11-27 12:07:12.377212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:22.404 [2024-11-27 12:07:12.377222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:22.404 [2024-11-27 12:07:12.377232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:22.404 [2024-11-27 12:07:12.377242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:22.404 [2024-11-27 12:07:12.377252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:22.404 [2024-11-27 12:07:12.377262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:22.404 [2024-11-27 12:07:12.377271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:22.404 [2024-11-27 12:07:12.377281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:22.404 [2024-11-27 12:07:12.377292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:22.404 [2024-11-27 12:07:12.377302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:22.404 [2024-11-27 12:07:12.377312] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:22.404 [2024-11-27 12:07:12.377322] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:22.404 [2024-11-27 12:07:12.377333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:22.404 [2024-11-27 12:07:12.377343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:22.404 [2024-11-27 12:07:12.377354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:22.404 [2024-11-27 12:07:12.377377] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:22.404 [2024-11-27 12:07:12.377387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.404 [2024-11-27 12:07:12.377401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:22.404 [2024-11-27 12:07:12.377410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:23:22.404 [2024-11-27 12:07:12.377420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.404 [2024-11-27 12:07:12.415217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.404 [2024-11-27 12:07:12.415252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:22.404 [2024-11-27 12:07:12.415266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.803 ms 00:23:22.404 [2024-11-27 12:07:12.415277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.404 [2024-11-27 12:07:12.415416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.404 [2024-11-27 12:07:12.415431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:22.404 [2024-11-27 12:07:12.415443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:23:22.404 [2024-11-27 12:07:12.415453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.664 [2024-11-27 12:07:12.468729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.664 [2024-11-27 12:07:12.468763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:22.664 [2024-11-27 12:07:12.468780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.340 ms 00:23:22.664 [2024-11-27 12:07:12.468790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.664 [2024-11-27 12:07:12.468875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.664 [2024-11-27 12:07:12.468888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:22.664 [2024-11-27 12:07:12.468899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:22.664 [2024-11-27 12:07:12.468910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.664 [2024-11-27 12:07:12.469342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.664 [2024-11-27 12:07:12.469372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:22.664 [2024-11-27 12:07:12.469405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:23:22.664 [2024-11-27 12:07:12.469415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.664 [2024-11-27 12:07:12.469531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.664 [2024-11-27 12:07:12.469545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:22.664 [2024-11-27 12:07:12.469557] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:23:22.664 [2024-11-27 12:07:12.469566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.664 [2024-11-27 12:07:12.489196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.664 [2024-11-27 12:07:12.489228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:22.664 [2024-11-27 12:07:12.489241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.639 ms 00:23:22.664 [2024-11-27 12:07:12.489267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.664 [2024-11-27 12:07:12.507914] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:22.664 [2024-11-27 12:07:12.507951] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:22.664 [2024-11-27 12:07:12.507965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.665 [2024-11-27 12:07:12.507992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:22.665 [2024-11-27 12:07:12.508003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.627 ms 00:23:22.665 [2024-11-27 12:07:12.508013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.665 [2024-11-27 12:07:12.536089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.665 [2024-11-27 12:07:12.536126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:22.665 [2024-11-27 12:07:12.536139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.043 ms 00:23:22.665 [2024-11-27 12:07:12.536150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.665 [2024-11-27 12:07:12.553869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.665 [2024-11-27 12:07:12.553903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:22.665 [2024-11-27 12:07:12.553915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.671 ms 00:23:22.665 [2024-11-27 12:07:12.553925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.665 [2024-11-27 12:07:12.570915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.665 [2024-11-27 12:07:12.570950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:22.665 [2024-11-27 12:07:12.570962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.945 ms 00:23:22.665 [2024-11-27 12:07:12.570988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.665 [2024-11-27 12:07:12.571763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.665 [2024-11-27 12:07:12.571789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:22.665 [2024-11-27 12:07:12.571801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:23:22.665 [2024-11-27 12:07:12.571811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.665 [2024-11-27 12:07:12.652321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.665 [2024-11-27 12:07:12.652384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:22.665 [2024-11-27 12:07:12.652400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 80.612 ms 00:23:22.665 [2024-11-27 12:07:12.652411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.665 [2024-11-27 12:07:12.662399] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:22.665 [2024-11-27 12:07:12.677874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.665 [2024-11-27 12:07:12.677916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:22.665 [2024-11-27 12:07:12.677932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.412 ms 00:23:22.665 [2024-11-27 12:07:12.677967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.665 [2024-11-27 12:07:12.678078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.665 [2024-11-27 12:07:12.678092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:22.665 [2024-11-27 12:07:12.678104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:22.665 [2024-11-27 12:07:12.678115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.665 [2024-11-27 12:07:12.678170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.665 [2024-11-27 12:07:12.678181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:22.665 [2024-11-27 12:07:12.678191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:22.665 [2024-11-27 12:07:12.678205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.665 [2024-11-27 12:07:12.678238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.665 [2024-11-27 12:07:12.678252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:22.665 [2024-11-27 12:07:12.678262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:22.665 [2024-11-27 12:07:12.678271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.665 [2024-11-27 12:07:12.678308] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:22.665 [2024-11-27 12:07:12.678320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.665 [2024-11-27 12:07:12.678330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:22.665 [2024-11-27 12:07:12.678340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:22.665 [2024-11-27 12:07:12.678350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.665 [2024-11-27 12:07:12.712804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.665 [2024-11-27 12:07:12.712842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:22.665 [2024-11-27 12:07:12.712857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.488 ms 00:23:22.665 [2024-11-27 12:07:12.712868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.665 [2024-11-27 12:07:12.712985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.665 [2024-11-27 12:07:12.713000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:22.665 [2024-11-27 12:07:12.713011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:22.665 [2024-11-27 12:07:12.713021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
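As a sanity check, the layout dump above is internally consistent, assuming the FTL's 4 KiB block size: the superblock metadata entry for the L2P region (type 0x2, blk_offs 0x20, blk_sz 0x5a00) works out to exactly the 90.00 MiB shown for the l2p region, which in turn equals the reported 23592960 L2P entries times the 4-byte L2P address size. The arithmetic in shell:

# Cross-check of the layout dump (the 4096-byte FTL block size is an assumption here).
echo $(( 0x5a00 * 4096 / 1024 / 1024 ))   # l2p region: 23040 blocks -> 90 MiB
echo $(( 23592960 * 4 / 1024 / 1024 ))    # L2P entries x 4-byte addresses -> 90 MiB

Both expressions print 90, matching the "blocks: 90.00 MiB" line for the l2p region.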
00:23:22.665 [2024-11-27 12:07:12.714088] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:22.924 [2024-11-27 12:07:12.718193] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 374.920 ms, result 0 00:23:22.924 [2024-11-27 12:07:12.719067] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:22.924 [2024-11-27 12:07:12.736642] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:22.924  [2024-11-27T12:07:12.977Z] Copying: 4096/4096 [kB] (average 22 MBps)[2024-11-27 12:07:12.917549] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:22.924 [2024-11-27 12:07:12.931079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.924 [2024-11-27 12:07:12.931114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:22.924 [2024-11-27 12:07:12.931131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:22.924 [2024-11-27 12:07:12.931157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.924 [2024-11-27 12:07:12.931178] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:22.924 [2024-11-27 12:07:12.935268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.924 [2024-11-27 12:07:12.935295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:22.924 [2024-11-27 12:07:12.935306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.082 ms 00:23:22.924 [2024-11-27 12:07:12.935316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.924 [2024-11-27 12:07:12.937266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.924 [2024-11-27 12:07:12.937303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:22.924 [2024-11-27 12:07:12.937315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.931 ms 00:23:22.924 [2024-11-27 12:07:12.937325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.924 [2024-11-27 12:07:12.940542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.925 [2024-11-27 12:07:12.940572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:22.925 [2024-11-27 12:07:12.940584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.198 ms 00:23:22.925 [2024-11-27 12:07:12.940610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.925 [2024-11-27 12:07:12.946012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.925 [2024-11-27 12:07:12.946045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:22.925 [2024-11-27 12:07:12.946056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.373 ms 00:23:22.925 [2024-11-27 12:07:12.946066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.185 [2024-11-27 12:07:12.980683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.185 [2024-11-27 12:07:12.980726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:23.185 [2024-11-27 12:07:12.980738] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 34.595 ms 00:23:23.185 [2024-11-27 12:07:12.980747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.185 [2024-11-27 12:07:13.000469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.185 [2024-11-27 12:07:13.000510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:23.185 [2024-11-27 12:07:13.000523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.670 ms 00:23:23.185 [2024-11-27 12:07:13.000532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.185 [2024-11-27 12:07:13.000655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.186 [2024-11-27 12:07:13.000668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:23.186 [2024-11-27 12:07:13.000690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:23.186 [2024-11-27 12:07:13.000699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.186 [2024-11-27 12:07:13.035150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.186 [2024-11-27 12:07:13.035183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:23.186 [2024-11-27 12:07:13.035195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.489 ms 00:23:23.186 [2024-11-27 12:07:13.035204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.186 [2024-11-27 12:07:13.070662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.186 [2024-11-27 12:07:13.070701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:23.186 [2024-11-27 12:07:13.070714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.463 ms 00:23:23.186 [2024-11-27 12:07:13.070724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.186 [2024-11-27 12:07:13.105259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.186 [2024-11-27 12:07:13.105293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:23.186 [2024-11-27 12:07:13.105305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.534 ms 00:23:23.186 [2024-11-27 12:07:13.105314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.186 [2024-11-27 12:07:13.139289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.186 [2024-11-27 12:07:13.139322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:23.186 [2024-11-27 12:07:13.139334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.947 ms 00:23:23.186 [2024-11-27 12:07:13.139344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.186 [2024-11-27 12:07:13.139400] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:23.186 [2024-11-27 12:07:13.139416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
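The bands-validity dump that continues below emits one line per band in the fixed form "Band N: <valid> / <total> wr_cnt: <writes> state: <state>", 100 lines per dump in this run. When eyeballing these logs an aggregate is often more useful than the raw list; a small grep/awk sketch over a saved copy of this log (the build.log filename is a placeholder):

# Summarize band-validity lines; field numbers follow the format quoted above.
grep -o 'Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]* state: [a-z]*' build.log |
awk '{ bands++; valid += $3; writes += $7; states[$9]++ }
     END { printf "bands=%d valid=%d wr_cnt=%d\n", bands, valid, writes
           for (s in states) printf "  state %s: %d\n", s, states[s] }'

For this dump every band reports 0 / 261120 valid blocks in state free, consistent with a clean shutdown of a device whose user data was fully trimmed.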
00:23:23.186 [2024-11-27 12:07:13.139458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.139999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.140009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.140018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.140028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.140038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.140047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.140057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.140067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.140076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.140085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:23.186 [2024-11-27 12:07:13.140095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140164] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:23.187 [2024-11-27 12:07:13.140404] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:23.187 [2024-11-27 12:07:13.140413] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ac0f0c90-8a1c-488f-b0e9-47cb15d830e6 00:23:23.187 [2024-11-27 12:07:13.140423] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:23.187 [2024-11-27 12:07:13.140432] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:23:23.187 [2024-11-27 12:07:13.140442] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:23.187 [2024-11-27 12:07:13.140451] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:23.187 [2024-11-27 12:07:13.140460] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:23.187 [2024-11-27 12:07:13.140471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:23.187 [2024-11-27 12:07:13.140483] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:23.187 [2024-11-27 12:07:13.140492] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:23.187 [2024-11-27 12:07:13.140500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:23.187 [2024-11-27 12:07:13.140509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.187 [2024-11-27 12:07:13.140518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:23.187 [2024-11-27 12:07:13.140528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:23:23.187 [2024-11-27 12:07:13.140537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.187 [2024-11-27 12:07:13.159970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.187 [2024-11-27 12:07:13.160000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:23.187 [2024-11-27 12:07:13.160012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.445 ms 00:23:23.187 [2024-11-27 12:07:13.160021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.187 [2024-11-27 12:07:13.160541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.187 [2024-11-27 12:07:13.160559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:23.187 [2024-11-27 12:07:13.160570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:23:23.187 [2024-11-27 12:07:13.160579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.187 [2024-11-27 12:07:13.214124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.187 [2024-11-27 12:07:13.214157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:23.187 [2024-11-27 12:07:13.214169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.187 [2024-11-27 12:07:13.214199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.187 [2024-11-27 12:07:13.214281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.187 [2024-11-27 12:07:13.214293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:23.187 [2024-11-27 12:07:13.214305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.187 [2024-11-27 12:07:13.214314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.187 [2024-11-27 12:07:13.214364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.187 [2024-11-27 12:07:13.214390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:23.187 [2024-11-27 12:07:13.214401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.187 [2024-11-27 12:07:13.214411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.187 [2024-11-27 12:07:13.214434] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.187 [2024-11-27 12:07:13.214444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:23.187 [2024-11-27 12:07:13.214455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.187 [2024-11-27 12:07:13.214465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.447 [2024-11-27 12:07:13.334958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.447 [2024-11-27 12:07:13.335002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:23.447 [2024-11-27 12:07:13.335017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.447 [2024-11-27 12:07:13.335032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.447 [2024-11-27 12:07:13.434624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.447 [2024-11-27 12:07:13.434678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:23.447 [2024-11-27 12:07:13.434694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.447 [2024-11-27 12:07:13.434705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.447 [2024-11-27 12:07:13.434786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.447 [2024-11-27 12:07:13.434798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:23.447 [2024-11-27 12:07:13.434809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.447 [2024-11-27 12:07:13.434820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.447 [2024-11-27 12:07:13.434850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.447 [2024-11-27 12:07:13.434867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:23.447 [2024-11-27 12:07:13.434878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.447 [2024-11-27 12:07:13.434888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.447 [2024-11-27 12:07:13.435007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.447 [2024-11-27 12:07:13.435020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:23.447 [2024-11-27 12:07:13.435031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.447 [2024-11-27 12:07:13.435041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.447 [2024-11-27 12:07:13.435098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.447 [2024-11-27 12:07:13.435111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:23.447 [2024-11-27 12:07:13.435126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.447 [2024-11-27 12:07:13.435137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.447 [2024-11-27 12:07:13.435177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.447 [2024-11-27 12:07:13.435189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:23.447 [2024-11-27 12:07:13.435199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.447 [2024-11-27 12:07:13.435209] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:23.447 [2024-11-27 12:07:13.435251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.447 [2024-11-27 12:07:13.435266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:23.447 [2024-11-27 12:07:13.435277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.447 [2024-11-27 12:07:13.435286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.447 [2024-11-27 12:07:13.435439] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 505.162 ms, result 0 00:23:24.825 00:23:24.826 00:23:24.826 12:07:14 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78710 00:23:24.826 12:07:14 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:23:24.826 12:07:14 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78710 00:23:24.826 12:07:14 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78710 ']' 00:23:24.826 12:07:14 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.826 12:07:14 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.826 12:07:14 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.826 12:07:14 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.826 12:07:14 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:24.826 [2024-11-27 12:07:14.591632] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
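The trace above shows ftl/trim.sh@92 starting spdk_tgt with the ftl_init debug log flag and then blocking in waitforlisten until pid 78710 answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait pattern, assuming an illustrative poll loop and an rpc_get_methods probe rather than the repo's actual waitforlisten internals (the spdk_tgt path, the -L ftl_init flag, and the socket path are taken from the trace; the retry count and sleep interval are invented):

    # Sketch only: retry count, sleep interval, and the rpc_get_methods probe
    # are assumptions; binary path and -L ftl_init match the trace above.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!

    for ((i = 0; i < 100; i++)); do
        # kill -0 sends no signal; it only checks the target is still alive.
        kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        # Consider the target "listening" once any RPC succeeds on the socket.
        if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.1
    done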
00:23:24.826 [2024-11-27 12:07:14.591762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78710 ] 00:23:24.826 [2024-11-27 12:07:14.768843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.826 [2024-11-27 12:07:14.874755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.763 12:07:15 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.763 12:07:15 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:23:25.763 12:07:15 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:23:26.021 [2024-11-27 12:07:15.941494] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:26.021 [2024-11-27 12:07:15.941569] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:26.281 [2024-11-27 12:07:16.124520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.281 [2024-11-27 12:07:16.124569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:26.281 [2024-11-27 12:07:16.124604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:26.281 [2024-11-27 12:07:16.124615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.282 [2024-11-27 12:07:16.127892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.282 [2024-11-27 12:07:16.127927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:26.282 [2024-11-27 12:07:16.127941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.259 ms 00:23:26.282 [2024-11-27 12:07:16.127950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.282 [2024-11-27 12:07:16.128058] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:26.282 [2024-11-27 12:07:16.129033] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:26.282 [2024-11-27 12:07:16.129063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.282 [2024-11-27 12:07:16.129074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:26.282 [2024-11-27 12:07:16.129087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:23:26.282 [2024-11-27 12:07:16.129099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.282 [2024-11-27 12:07:16.130579] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:26.282 [2024-11-27 12:07:16.149259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.282 [2024-11-27 12:07:16.149304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:26.282 [2024-11-27 12:07:16.149318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.715 ms 00:23:26.282 [2024-11-27 12:07:16.149348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.282 [2024-11-27 12:07:16.149447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.282 [2024-11-27 12:07:16.149464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:26.282 [2024-11-27 12:07:16.149475] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:26.282 [2024-11-27 12:07:16.149488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.282 [2024-11-27 12:07:16.156211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.282 [2024-11-27 12:07:16.156250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:26.282 [2024-11-27 12:07:16.156263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.684 ms 00:23:26.282 [2024-11-27 12:07:16.156276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.282 [2024-11-27 12:07:16.156400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.282 [2024-11-27 12:07:16.156421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:26.282 [2024-11-27 12:07:16.156432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:23:26.282 [2024-11-27 12:07:16.156454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.282 [2024-11-27 12:07:16.156484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.282 [2024-11-27 12:07:16.156500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:26.282 [2024-11-27 12:07:16.156511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:26.282 [2024-11-27 12:07:16.156525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.282 [2024-11-27 12:07:16.156551] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:26.282 [2024-11-27 12:07:16.161366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.282 [2024-11-27 12:07:16.161413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:26.282 [2024-11-27 12:07:16.161431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.817 ms 00:23:26.282 [2024-11-27 12:07:16.161441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.282 [2024-11-27 12:07:16.161515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.282 [2024-11-27 12:07:16.161527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:26.282 [2024-11-27 12:07:16.161549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:26.282 [2024-11-27 12:07:16.161559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.282 [2024-11-27 12:07:16.161586] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:26.282 [2024-11-27 12:07:16.161610] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:26.282 [2024-11-27 12:07:16.161660] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:26.282 [2024-11-27 12:07:16.161681] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:26.282 [2024-11-27 12:07:16.161782] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:26.282 [2024-11-27 12:07:16.161797] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:26.282 [2024-11-27 12:07:16.161823] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:26.282 [2024-11-27 12:07:16.161836] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:26.282 [2024-11-27 12:07:16.161853] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:26.282 [2024-11-27 12:07:16.161865] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:26.282 [2024-11-27 12:07:16.161879] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:26.282 [2024-11-27 12:07:16.161889] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:26.282 [2024-11-27 12:07:16.161908] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:26.282 [2024-11-27 12:07:16.161919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.282 [2024-11-27 12:07:16.161934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:26.282 [2024-11-27 12:07:16.161944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:23:26.282 [2024-11-27 12:07:16.161964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.282 [2024-11-27 12:07:16.162039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.282 [2024-11-27 12:07:16.162057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:26.282 [2024-11-27 12:07:16.162067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:26.282 [2024-11-27 12:07:16.162082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.282 [2024-11-27 12:07:16.162170] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:26.282 [2024-11-27 12:07:16.162187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:26.282 [2024-11-27 12:07:16.162198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:26.282 [2024-11-27 12:07:16.162213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.282 [2024-11-27 12:07:16.162224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:26.282 [2024-11-27 12:07:16.162238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:26.282 [2024-11-27 12:07:16.162247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:26.282 [2024-11-27 12:07:16.162267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:26.282 [2024-11-27 12:07:16.162277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:26.282 [2024-11-27 12:07:16.162291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:26.282 [2024-11-27 12:07:16.162301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:26.282 [2024-11-27 12:07:16.162315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:26.282 [2024-11-27 12:07:16.162324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:26.282 [2024-11-27 12:07:16.162339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:26.282 [2024-11-27 12:07:16.162348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:26.282 [2024-11-27 12:07:16.162373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.282 
[2024-11-27 12:07:16.162384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:26.282 [2024-11-27 12:07:16.162398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:26.282 [2024-11-27 12:07:16.162418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.282 [2024-11-27 12:07:16.162433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:26.282 [2024-11-27 12:07:16.162443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:26.282 [2024-11-27 12:07:16.162457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:26.282 [2024-11-27 12:07:16.162466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:26.282 [2024-11-27 12:07:16.162484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:26.282 [2024-11-27 12:07:16.162494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:26.282 [2024-11-27 12:07:16.162507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:26.282 [2024-11-27 12:07:16.162517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:26.282 [2024-11-27 12:07:16.162531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:26.282 [2024-11-27 12:07:16.162540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:26.282 [2024-11-27 12:07:16.162555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:26.282 [2024-11-27 12:07:16.162564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:26.282 [2024-11-27 12:07:16.162578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:26.282 [2024-11-27 12:07:16.162587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:26.282 [2024-11-27 12:07:16.162603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:26.282 [2024-11-27 12:07:16.162612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:26.282 [2024-11-27 12:07:16.162626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:26.282 [2024-11-27 12:07:16.162635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:26.282 [2024-11-27 12:07:16.162649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:26.282 [2024-11-27 12:07:16.162659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:26.282 [2024-11-27 12:07:16.162676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.282 [2024-11-27 12:07:16.162686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:26.282 [2024-11-27 12:07:16.162701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:26.282 [2024-11-27 12:07:16.162710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.282 [2024-11-27 12:07:16.162724] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:26.283 [2024-11-27 12:07:16.162739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:26.283 [2024-11-27 12:07:16.162753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:26.283 [2024-11-27 12:07:16.162763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:26.283 [2024-11-27 12:07:16.162778] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:23:26.283 [2024-11-27 12:07:16.162788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:26.283 [2024-11-27 12:07:16.162802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:26.283 [2024-11-27 12:07:16.162811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:26.283 [2024-11-27 12:07:16.162823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:26.283 [2024-11-27 12:07:16.162832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:26.283 [2024-11-27 12:07:16.162845] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:26.283 [2024-11-27 12:07:16.162857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:26.283 [2024-11-27 12:07:16.162887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:26.283 [2024-11-27 12:07:16.162897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:26.283 [2024-11-27 12:07:16.162914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:26.283 [2024-11-27 12:07:16.162924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:26.283 [2024-11-27 12:07:16.162937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:26.283 [2024-11-27 12:07:16.162946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:26.283 [2024-11-27 12:07:16.162959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:26.283 [2024-11-27 12:07:16.162969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:26.283 [2024-11-27 12:07:16.162981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:26.283 [2024-11-27 12:07:16.162991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:26.283 [2024-11-27 12:07:16.163003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:26.283 [2024-11-27 12:07:16.163013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:26.283 [2024-11-27 12:07:16.163025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:26.283 [2024-11-27 12:07:16.163036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:26.283 [2024-11-27 12:07:16.163048] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:26.283 [2024-11-27 
12:07:16.163059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:26.283 [2024-11-27 12:07:16.163075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:26.283 [2024-11-27 12:07:16.163090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:26.283 [2024-11-27 12:07:16.163104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:26.283 [2024-11-27 12:07:16.163114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:26.283 [2024-11-27 12:07:16.163127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.283 [2024-11-27 12:07:16.163137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:26.283 [2024-11-27 12:07:16.163149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:23:26.283 [2024-11-27 12:07:16.163161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.283 [2024-11-27 12:07:16.202706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.283 [2024-11-27 12:07:16.202739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:26.283 [2024-11-27 12:07:16.202773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.547 ms 00:23:26.283 [2024-11-27 12:07:16.202789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.283 [2024-11-27 12:07:16.202910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.283 [2024-11-27 12:07:16.202923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:26.283 [2024-11-27 12:07:16.202939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:26.283 [2024-11-27 12:07:16.202949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.283 [2024-11-27 12:07:16.245122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.283 [2024-11-27 12:07:16.245167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:26.283 [2024-11-27 12:07:16.245184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.211 ms 00:23:26.283 [2024-11-27 12:07:16.245194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.283 [2024-11-27 12:07:16.245302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.283 [2024-11-27 12:07:16.245314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:26.283 [2024-11-27 12:07:16.245330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:26.283 [2024-11-27 12:07:16.245340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.283 [2024-11-27 12:07:16.245810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.283 [2024-11-27 12:07:16.245836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:26.283 [2024-11-27 12:07:16.245852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:23:26.283 [2024-11-27 12:07:16.245862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:26.283 [2024-11-27 12:07:16.245985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.283 [2024-11-27 12:07:16.246004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:26.283 [2024-11-27 12:07:16.246019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:26.283 [2024-11-27 12:07:16.246030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.283 [2024-11-27 12:07:16.266557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.283 [2024-11-27 12:07:16.266588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:26.283 [2024-11-27 12:07:16.266619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.535 ms 00:23:26.283 [2024-11-27 12:07:16.266630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.283 [2024-11-27 12:07:16.313280] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:26.283 [2024-11-27 12:07:16.313324] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:26.283 [2024-11-27 12:07:16.313351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.283 [2024-11-27 12:07:16.313371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:26.283 [2024-11-27 12:07:16.313387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.683 ms 00:23:26.283 [2024-11-27 12:07:16.313409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.543 [2024-11-27 12:07:16.342162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.543 [2024-11-27 12:07:16.342197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:26.543 [2024-11-27 12:07:16.342217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.708 ms 00:23:26.543 [2024-11-27 12:07:16.342228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.543 [2024-11-27 12:07:16.359401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.543 [2024-11-27 12:07:16.359434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:26.543 [2024-11-27 12:07:16.359452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.097 ms 00:23:26.543 [2024-11-27 12:07:16.359461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.543 [2024-11-27 12:07:16.376293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.543 [2024-11-27 12:07:16.376325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:26.543 [2024-11-27 12:07:16.376340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.766 ms 00:23:26.543 [2024-11-27 12:07:16.376348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.543 [2024-11-27 12:07:16.377116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.543 [2024-11-27 12:07:16.377143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:26.543 [2024-11-27 12:07:16.377158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.644 ms 00:23:26.543 [2024-11-27 12:07:16.377169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.543 [2024-11-27 
12:07:16.458101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.543 [2024-11-27 12:07:16.458157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:26.543 [2024-11-27 12:07:16.458175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.032 ms 00:23:26.543 [2024-11-27 12:07:16.458185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.543 [2024-11-27 12:07:16.468522] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:26.543 [2024-11-27 12:07:16.483756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.543 [2024-11-27 12:07:16.483814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:26.543 [2024-11-27 12:07:16.483828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.484 ms 00:23:26.543 [2024-11-27 12:07:16.483856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.543 [2024-11-27 12:07:16.483945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.543 [2024-11-27 12:07:16.483960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:26.543 [2024-11-27 12:07:16.483972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:26.543 [2024-11-27 12:07:16.483984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.543 [2024-11-27 12:07:16.484038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.543 [2024-11-27 12:07:16.484051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:26.543 [2024-11-27 12:07:16.484061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:26.543 [2024-11-27 12:07:16.484076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.543 [2024-11-27 12:07:16.484101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.543 [2024-11-27 12:07:16.484114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:26.543 [2024-11-27 12:07:16.484124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:26.544 [2024-11-27 12:07:16.484136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.544 [2024-11-27 12:07:16.484173] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:26.544 [2024-11-27 12:07:16.484190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.544 [2024-11-27 12:07:16.484219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:26.544 [2024-11-27 12:07:16.484232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:26.544 [2024-11-27 12:07:16.484245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.544 [2024-11-27 12:07:16.518749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.544 [2024-11-27 12:07:16.518784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:26.544 [2024-11-27 12:07:16.518800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.531 ms 00:23:26.544 [2024-11-27 12:07:16.518810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.544 [2024-11-27 12:07:16.518936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.544 [2024-11-27 12:07:16.518949] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:26.544 [2024-11-27 12:07:16.518965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:26.544 [2024-11-27 12:07:16.518975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.544 [2024-11-27 12:07:16.519993] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:26.544 [2024-11-27 12:07:16.524240] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 395.824 ms, result 0 00:23:26.544 [2024-11-27 12:07:16.525511] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:26.544 Some configs were skipped because the RPC state that can call them passed over. 00:23:26.544 12:07:16 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:23:26.803 [2024-11-27 12:07:16.765780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.803 [2024-11-27 12:07:16.765829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:26.803 [2024-11-27 12:07:16.765843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.846 ms 00:23:26.803 [2024-11-27 12:07:16.765856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.803 [2024-11-27 12:07:16.765890] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.957 ms, result 0 00:23:26.803 true 00:23:26.803 12:07:16 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:23:27.062 [2024-11-27 12:07:16.949169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.062 [2024-11-27 12:07:16.949207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:27.062 [2024-11-27 12:07:16.949224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.377 ms 00:23:27.062 [2024-11-27 12:07:16.949234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.062 [2024-11-27 12:07:16.949272] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.481 ms, result 0 00:23:27.062 true 00:23:27.062 12:07:16 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78710 00:23:27.062 12:07:16 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78710 ']' 00:23:27.062 12:07:16 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78710 00:23:27.062 12:07:16 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:23:27.062 12:07:16 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.062 12:07:16 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78710 00:23:27.062 killing process with pid 78710 00:23:27.062 12:07:17 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:27.062 12:07:17 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:27.062 12:07:17 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78710' 00:23:27.062 12:07:17 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78710 00:23:27.062 12:07:17 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78710 00:23:28.443 [2024-11-27 12:07:18.082824] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.443 [2024-11-27 12:07:18.082875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:28.443 [2024-11-27 12:07:18.082907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:28.443 [2024-11-27 12:07:18.082919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.443 [2024-11-27 12:07:18.082945] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:28.443 [2024-11-27 12:07:18.086981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.443 [2024-11-27 12:07:18.087012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:28.443 [2024-11-27 12:07:18.087030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.021 ms 00:23:28.443 [2024-11-27 12:07:18.087040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.443 [2024-11-27 12:07:18.087281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.443 [2024-11-27 12:07:18.087294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:28.443 [2024-11-27 12:07:18.087306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:23:28.443 [2024-11-27 12:07:18.087316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.443 [2024-11-27 12:07:18.090694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.443 [2024-11-27 12:07:18.090732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:28.443 [2024-11-27 12:07:18.090746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.344 ms 00:23:28.443 [2024-11-27 12:07:18.090757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.443 [2024-11-27 12:07:18.096344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.443 [2024-11-27 12:07:18.096380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:28.443 [2024-11-27 12:07:18.096411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.555 ms 00:23:28.443 [2024-11-27 12:07:18.096420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.443 [2024-11-27 12:07:18.111025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.443 [2024-11-27 12:07:18.111065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:28.443 [2024-11-27 12:07:18.111082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.569 ms 00:23:28.443 [2024-11-27 12:07:18.111091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.443 [2024-11-27 12:07:18.121143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.443 [2024-11-27 12:07:18.121178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:28.443 [2024-11-27 12:07:18.121208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.999 ms 00:23:28.443 [2024-11-27 12:07:18.121218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.443 [2024-11-27 12:07:18.121342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.443 [2024-11-27 12:07:18.121372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:28.443 [2024-11-27 12:07:18.121386] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:23:28.443 [2024-11-27 12:07:18.121396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.443 [2024-11-27 12:07:18.136475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.443 [2024-11-27 12:07:18.136506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:28.443 [2024-11-27 12:07:18.136536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.081 ms 00:23:28.443 [2024-11-27 12:07:18.136545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.443 [2024-11-27 12:07:18.150886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.443 [2024-11-27 12:07:18.150914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:28.443 [2024-11-27 12:07:18.150931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.315 ms 00:23:28.443 [2024-11-27 12:07:18.150940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.443 [2024-11-27 12:07:18.165073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.443 [2024-11-27 12:07:18.165104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:28.443 [2024-11-27 12:07:18.165119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.090 ms 00:23:28.443 [2024-11-27 12:07:18.165128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.443 [2024-11-27 12:07:18.179055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.443 [2024-11-27 12:07:18.179084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:28.443 [2024-11-27 12:07:18.179098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.870 ms 00:23:28.443 [2024-11-27 12:07:18.179107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.443 [2024-11-27 12:07:18.179177] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:28.443 [2024-11-27 12:07:18.179192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:28.443 [2024-11-27 12:07:18.179210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:28.443 [2024-11-27 12:07:18.179220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:28.443 [2024-11-27 12:07:18.179233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:28.443 [2024-11-27 12:07:18.179243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:28.443 [2024-11-27 12:07:18.179258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:28.443 [2024-11-27 12:07:18.179269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:28.443 [2024-11-27 12:07:18.179281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:28.443 [2024-11-27 12:07:18.179291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:28.443 [2024-11-27 12:07:18.179304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:28.443 [2024-11-27 
12:07:18.179314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:28.443 [2024-11-27 12:07:18.179326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:28.443 [2024-11-27 12:07:18.179336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:28.443 [2024-11-27 12:07:18.179349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:28.443 [2024-11-27 12:07:18.179372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:28.443 [2024-11-27 12:07:18.179387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:23:28.444 [2024-11-27 12:07:18.179637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.179992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:28.444 [2024-11-27 12:07:18.180424] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:28.444 [2024-11-27 12:07:18.180440] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ac0f0c90-8a1c-488f-b0e9-47cb15d830e6 00:23:28.444 [2024-11-27 12:07:18.180454] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:28.444 [2024-11-27 12:07:18.180467] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:28.444 [2024-11-27 12:07:18.180476] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:28.444 [2024-11-27 12:07:18.180488] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:28.444 [2024-11-27 12:07:18.180498] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:28.444 [2024-11-27 12:07:18.180511] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:28.445 [2024-11-27 12:07:18.180520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:28.445 [2024-11-27 12:07:18.180531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:28.445 [2024-11-27 12:07:18.180540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:28.445 [2024-11-27 12:07:18.180552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
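The statistics dump above is internally consistent: the device reports 960 total writes against 0 user writes, so the write-amplification factor can only print as "WAF: inf". A minimal Python sketch of that arithmetic, assuming WAF here is simply total media writes divided by user writes (an inference from the dump, not taken from the SPDK source):

    # WAF as implied by the ftl_debug.c stats dump above.
    # Assumption (not from SPDK source): WAF = total media writes / user writes.
    def waf(total_writes: int, user_writes: int) -> float:
        return float("inf") if user_writes == 0 else total_writes / user_writes

    print(waf(960, 0))  # -> inf, matching "WAF: inf" in the dump

The 960 non-user writes are consistent with a bring-up/shutdown cycle that only touched metadata: every band still reads "0 / 261120 wr_cnt: 0 state: free".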
00:23:28.445 [2024-11-27 12:07:18.180562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:28.445 [2024-11-27 12:07:18.180575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.380 ms 00:23:28.445 [2024-11-27 12:07:18.180587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.445 [2024-11-27 12:07:18.199785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.445 [2024-11-27 12:07:18.199815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:28.445 [2024-11-27 12:07:18.199831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.194 ms 00:23:28.445 [2024-11-27 12:07:18.199841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.445 [2024-11-27 12:07:18.200354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.445 [2024-11-27 12:07:18.200385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:28.445 [2024-11-27 12:07:18.200402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:23:28.445 [2024-11-27 12:07:18.200413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.445 [2024-11-27 12:07:18.269186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.445 [2024-11-27 12:07:18.269219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:28.445 [2024-11-27 12:07:18.269236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.445 [2024-11-27 12:07:18.269246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.445 [2024-11-27 12:07:18.269351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.445 [2024-11-27 12:07:18.269372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:28.445 [2024-11-27 12:07:18.269394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.445 [2024-11-27 12:07:18.269405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.445 [2024-11-27 12:07:18.269458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.445 [2024-11-27 12:07:18.269471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:28.445 [2024-11-27 12:07:18.269491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.445 [2024-11-27 12:07:18.269501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.445 [2024-11-27 12:07:18.269524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.445 [2024-11-27 12:07:18.269535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:28.445 [2024-11-27 12:07:18.269550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.445 [2024-11-27 12:07:18.269565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.445 [2024-11-27 12:07:18.387335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.445 [2024-11-27 12:07:18.387392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:28.445 [2024-11-27 12:07:18.387408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.445 [2024-11-27 12:07:18.387418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.445 [2024-11-27 
12:07:18.484039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.445 [2024-11-27 12:07:18.484085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:28.445 [2024-11-27 12:07:18.484121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.445 [2024-11-27 12:07:18.484132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.445 [2024-11-27 12:07:18.484214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.445 [2024-11-27 12:07:18.484227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:28.445 [2024-11-27 12:07:18.484243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.445 [2024-11-27 12:07:18.484253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.445 [2024-11-27 12:07:18.484285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.445 [2024-11-27 12:07:18.484296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:28.445 [2024-11-27 12:07:18.484308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.445 [2024-11-27 12:07:18.484318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.445 [2024-11-27 12:07:18.484462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.445 [2024-11-27 12:07:18.484476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:28.445 [2024-11-27 12:07:18.484489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.445 [2024-11-27 12:07:18.484500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.445 [2024-11-27 12:07:18.484543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.445 [2024-11-27 12:07:18.484555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:28.445 [2024-11-27 12:07:18.484568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.445 [2024-11-27 12:07:18.484578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.445 [2024-11-27 12:07:18.484624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.445 [2024-11-27 12:07:18.484635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:28.445 [2024-11-27 12:07:18.484650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.445 [2024-11-27 12:07:18.484660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.445 [2024-11-27 12:07:18.484704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.445 [2024-11-27 12:07:18.484715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:28.445 [2024-11-27 12:07:18.484728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.445 [2024-11-27 12:07:18.484739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.445 [2024-11-27 12:07:18.484875] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 402.679 ms, result 0 00:23:29.824 12:07:19 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:29.824 [2024-11-27 12:07:19.570154] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:23:29.824 [2024-11-27 12:07:19.570285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78775 ] 00:23:29.824 [2024-11-27 12:07:19.752169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.824 [2024-11-27 12:07:19.857833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.395 [2024-11-27 12:07:20.212051] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:30.395 [2024-11-27 12:07:20.212120] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:30.395 [2024-11-27 12:07:20.373713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.395 [2024-11-27 12:07:20.373766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:30.395 [2024-11-27 12:07:20.373781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:30.395 [2024-11-27 12:07:20.373792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.395 [2024-11-27 12:07:20.377016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.395 [2024-11-27 12:07:20.377052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:30.395 [2024-11-27 12:07:20.377065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.209 ms 00:23:30.395 [2024-11-27 12:07:20.377074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.395 [2024-11-27 12:07:20.377182] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:30.395 [2024-11-27 12:07:20.378159] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:30.395 [2024-11-27 12:07:20.378187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.395 [2024-11-27 12:07:20.378198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:30.395 [2024-11-27 12:07:20.378209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.015 ms 00:23:30.395 [2024-11-27 12:07:20.378218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.395 [2024-11-27 12:07:20.379706] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:30.395 [2024-11-27 12:07:20.398456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.395 [2024-11-27 12:07:20.398493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:30.395 [2024-11-27 12:07:20.398507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.781 ms 00:23:30.395 [2024-11-27 12:07:20.398517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.395 [2024-11-27 12:07:20.398614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.395 [2024-11-27 12:07:20.398629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:30.395 [2024-11-27 12:07:20.398640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:30.395 [2024-11-27 
12:07:20.398651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.395 [2024-11-27 12:07:20.405382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.395 [2024-11-27 12:07:20.405410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:30.395 [2024-11-27 12:07:20.405422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.699 ms 00:23:30.395 [2024-11-27 12:07:20.405432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.395 [2024-11-27 12:07:20.405546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.395 [2024-11-27 12:07:20.405561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:30.395 [2024-11-27 12:07:20.405572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:30.395 [2024-11-27 12:07:20.405582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.395 [2024-11-27 12:07:20.405614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.395 [2024-11-27 12:07:20.405624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:30.395 [2024-11-27 12:07:20.405634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:30.396 [2024-11-27 12:07:20.405644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.396 [2024-11-27 12:07:20.405666] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:30.396 [2024-11-27 12:07:20.410589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.396 [2024-11-27 12:07:20.410620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:30.396 [2024-11-27 12:07:20.410631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.935 ms 00:23:30.396 [2024-11-27 12:07:20.410641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.396 [2024-11-27 12:07:20.410709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.396 [2024-11-27 12:07:20.410722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:30.396 [2024-11-27 12:07:20.410733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:30.396 [2024-11-27 12:07:20.410742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.396 [2024-11-27 12:07:20.410771] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:30.396 [2024-11-27 12:07:20.410791] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:30.396 [2024-11-27 12:07:20.410824] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:30.396 [2024-11-27 12:07:20.410843] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:30.396 [2024-11-27 12:07:20.410931] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:30.396 [2024-11-27 12:07:20.410943] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:30.396 [2024-11-27 12:07:20.410956] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
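The layout dump that follows can be cross-checked arithmetically: 23592960 L2P entries at an address size of 4 bytes occupy exactly the 90.00 MiB reported for the l2p region. A short Python sketch of that check (the 4 KiB FTL block size used for the addressable-space figure is an assumption, not printed in this log):

    # Cross-check of the l2p region size reported in the layout dump below.
    L2P_ENTRIES = 23592960       # "L2P entries" in the dump
    L2P_ADDR_SIZE = 4            # "L2P address size" in bytes
    BLOCK_SIZE = 4 * 1024        # assumed FTL block size in bytes

    l2p_mib = L2P_ENTRIES * L2P_ADDR_SIZE / (1 << 20)
    print(l2p_mib)               # -> 90.0, matching "Region l2p ... blocks: 90.00 MiB"

    # User-addressable space implied by the L2P table under that block size:
    print(L2P_ENTRIES * BLOCK_SIZE / (1 << 30))  # -> 90.0 (GiB)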
00:23:30.396 [2024-11-27 12:07:20.410972] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:30.396 [2024-11-27 12:07:20.410984] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:30.396 [2024-11-27 12:07:20.410995] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:30.396 [2024-11-27 12:07:20.411004] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:30.396 [2024-11-27 12:07:20.411014] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:30.396 [2024-11-27 12:07:20.411024] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:30.396 [2024-11-27 12:07:20.411034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.396 [2024-11-27 12:07:20.411044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:30.396 [2024-11-27 12:07:20.411054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:23:30.396 [2024-11-27 12:07:20.411063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.396 [2024-11-27 12:07:20.411138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.396 [2024-11-27 12:07:20.411152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:30.396 [2024-11-27 12:07:20.411162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:30.396 [2024-11-27 12:07:20.411172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.396 [2024-11-27 12:07:20.411262] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:30.396 [2024-11-27 12:07:20.411274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:30.396 [2024-11-27 12:07:20.411284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:30.396 [2024-11-27 12:07:20.411294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.396 [2024-11-27 12:07:20.411304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:30.396 [2024-11-27 12:07:20.411314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:30.396 [2024-11-27 12:07:20.411323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:30.396 [2024-11-27 12:07:20.411334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:30.396 [2024-11-27 12:07:20.411344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:30.396 [2024-11-27 12:07:20.411353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:30.396 [2024-11-27 12:07:20.411376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:30.396 [2024-11-27 12:07:20.411397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:30.396 [2024-11-27 12:07:20.411406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:30.396 [2024-11-27 12:07:20.411415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:30.396 [2024-11-27 12:07:20.411425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:30.396 [2024-11-27 12:07:20.411434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.396 [2024-11-27 12:07:20.411443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:23:30.396 [2024-11-27 12:07:20.411453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:30.396 [2024-11-27 12:07:20.411462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.396 [2024-11-27 12:07:20.411471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:30.396 [2024-11-27 12:07:20.411481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:30.396 [2024-11-27 12:07:20.411490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:30.396 [2024-11-27 12:07:20.411499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:30.396 [2024-11-27 12:07:20.411508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:30.396 [2024-11-27 12:07:20.411517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:30.396 [2024-11-27 12:07:20.411526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:30.396 [2024-11-27 12:07:20.411535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:30.396 [2024-11-27 12:07:20.411544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:30.396 [2024-11-27 12:07:20.411553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:30.396 [2024-11-27 12:07:20.411562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:30.396 [2024-11-27 12:07:20.411571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:30.396 [2024-11-27 12:07:20.411580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:30.396 [2024-11-27 12:07:20.411589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:30.396 [2024-11-27 12:07:20.411598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:30.396 [2024-11-27 12:07:20.411607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:30.396 [2024-11-27 12:07:20.411616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:30.396 [2024-11-27 12:07:20.411625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:30.396 [2024-11-27 12:07:20.411634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:30.396 [2024-11-27 12:07:20.411643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:30.396 [2024-11-27 12:07:20.411653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.396 [2024-11-27 12:07:20.411662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:30.396 [2024-11-27 12:07:20.411672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:30.396 [2024-11-27 12:07:20.411681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.396 [2024-11-27 12:07:20.411690] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:30.396 [2024-11-27 12:07:20.411699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:30.396 [2024-11-27 12:07:20.411712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:30.396 [2024-11-27 12:07:20.411722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:30.396 [2024-11-27 12:07:20.411731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:30.396 [2024-11-27 12:07:20.411741] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:30.396 [2024-11-27 12:07:20.411750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:30.396 [2024-11-27 12:07:20.411759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:30.396 [2024-11-27 12:07:20.411768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:30.396 [2024-11-27 12:07:20.411777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:30.396 [2024-11-27 12:07:20.411788] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:30.396 [2024-11-27 12:07:20.411800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:30.396 [2024-11-27 12:07:20.411811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:30.396 [2024-11-27 12:07:20.411821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:30.396 [2024-11-27 12:07:20.411831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:30.396 [2024-11-27 12:07:20.411842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:30.396 [2024-11-27 12:07:20.411852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:30.396 [2024-11-27 12:07:20.411863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:30.396 [2024-11-27 12:07:20.411873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:30.396 [2024-11-27 12:07:20.411883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:30.396 [2024-11-27 12:07:20.411893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:30.396 [2024-11-27 12:07:20.411903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:30.396 [2024-11-27 12:07:20.411913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:30.396 [2024-11-27 12:07:20.411923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:30.396 [2024-11-27 12:07:20.411933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:30.396 [2024-11-27 12:07:20.411943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:30.397 [2024-11-27 12:07:20.411953] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:30.397 [2024-11-27 12:07:20.411964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:30.397 [2024-11-27 12:07:20.411975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:30.397 [2024-11-27 12:07:20.411985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:30.397 [2024-11-27 12:07:20.411995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:30.397 [2024-11-27 12:07:20.412005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:30.397 [2024-11-27 12:07:20.412016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.397 [2024-11-27 12:07:20.412030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:30.397 [2024-11-27 12:07:20.412039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:23:30.397 [2024-11-27 12:07:20.412049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.704 [2024-11-27 12:07:20.449000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.704 [2024-11-27 12:07:20.449034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:30.704 [2024-11-27 12:07:20.449047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.954 ms 00:23:30.704 [2024-11-27 12:07:20.449057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.704 [2024-11-27 12:07:20.449193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.704 [2024-11-27 12:07:20.449205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:30.704 [2024-11-27 12:07:20.449216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:30.704 [2024-11-27 12:07:20.449226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.704 [2024-11-27 12:07:20.504786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.704 [2024-11-27 12:07:20.504822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:30.704 [2024-11-27 12:07:20.504839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.628 ms 00:23:30.704 [2024-11-27 12:07:20.504865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.704 [2024-11-27 12:07:20.504956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.704 [2024-11-27 12:07:20.504969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:30.704 [2024-11-27 12:07:20.504980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:30.704 [2024-11-27 12:07:20.504989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.704 [2024-11-27 12:07:20.505446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.704 [2024-11-27 12:07:20.505467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:30.704 [2024-11-27 12:07:20.505484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:23:30.704 [2024-11-27 12:07:20.505495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.704 [2024-11-27 12:07:20.505613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:30.704 [2024-11-27 12:07:20.505627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:30.704 [2024-11-27 12:07:20.505637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:23:30.704 [2024-11-27 12:07:20.505647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.704 [2024-11-27 12:07:20.524085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.704 [2024-11-27 12:07:20.524115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:30.704 [2024-11-27 12:07:20.524128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.446 ms 00:23:30.704 [2024-11-27 12:07:20.524138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.704 [2024-11-27 12:07:20.542682] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:30.704 [2024-11-27 12:07:20.542717] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:30.704 [2024-11-27 12:07:20.542749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.704 [2024-11-27 12:07:20.542760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:30.704 [2024-11-27 12:07:20.542771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.525 ms 00:23:30.704 [2024-11-27 12:07:20.542781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.704 [2024-11-27 12:07:20.570954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.704 [2024-11-27 12:07:20.570987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:30.704 [2024-11-27 12:07:20.571000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.127 ms 00:23:30.704 [2024-11-27 12:07:20.571011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.704 [2024-11-27 12:07:20.588924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.704 [2024-11-27 12:07:20.588957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:30.704 [2024-11-27 12:07:20.588985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.848 ms 00:23:30.704 [2024-11-27 12:07:20.588994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.704 [2024-11-27 12:07:20.606304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.704 [2024-11-27 12:07:20.606335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:30.704 [2024-11-27 12:07:20.606362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.262 ms 00:23:30.704 [2024-11-27 12:07:20.606383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.704 [2024-11-27 12:07:20.607181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.704 [2024-11-27 12:07:20.607207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:30.704 [2024-11-27 12:07:20.607219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:23:30.704 [2024-11-27 12:07:20.607229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.704 [2024-11-27 12:07:20.688237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.704 [2024-11-27 
12:07:20.688330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:30.704 [2024-11-27 12:07:20.688346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.112 ms 00:23:30.704 [2024-11-27 12:07:20.688356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.704 [2024-11-27 12:07:20.698509] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:30.704 [2024-11-27 12:07:20.714066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.704 [2024-11-27 12:07:20.714112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:30.704 [2024-11-27 12:07:20.714126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.647 ms 00:23:30.704 [2024-11-27 12:07:20.714142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.704 [2024-11-27 12:07:20.714260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.704 [2024-11-27 12:07:20.714273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:30.704 [2024-11-27 12:07:20.714285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:30.704 [2024-11-27 12:07:20.714296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.704 [2024-11-27 12:07:20.714350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.704 [2024-11-27 12:07:20.714378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:30.704 [2024-11-27 12:07:20.714389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:30.704 [2024-11-27 12:07:20.714404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.704 [2024-11-27 12:07:20.714444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.704 [2024-11-27 12:07:20.714458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:30.704 [2024-11-27 12:07:20.714469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:30.704 [2024-11-27 12:07:20.714479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.704 [2024-11-27 12:07:20.714514] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:30.704 [2024-11-27 12:07:20.714526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.704 [2024-11-27 12:07:20.714537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:30.705 [2024-11-27 12:07:20.714547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:30.705 [2024-11-27 12:07:20.714556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.018 [2024-11-27 12:07:20.750155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.018 [2024-11-27 12:07:20.750192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:31.018 [2024-11-27 12:07:20.750222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.632 ms 00:23:31.018 [2024-11-27 12:07:20.750233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.018 [2024-11-27 12:07:20.750349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.018 [2024-11-27 12:07:20.750379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:31.018 [2024-11-27 
12:07:20.750391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:31.018 [2024-11-27 12:07:20.750401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.018 [2024-11-27 12:07:20.751397] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:31.018 [2024-11-27 12:07:20.755549] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 377.999 ms, result 0 00:23:31.018 [2024-11-27 12:07:20.756401] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:31.018 [2024-11-27 12:07:20.774686] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:31.957  [2024-11-27T12:07:22.949Z] Copying: 28/256 [MB] (28 MBps) [2024-11-27T12:07:23.886Z] Copying: 53/256 [MB] (25 MBps) [2024-11-27T12:07:24.823Z] Copying: 78/256 [MB] (25 MBps) [2024-11-27T12:07:26.202Z] Copying: 103/256 [MB] (25 MBps) [2024-11-27T12:07:27.142Z] Copying: 129/256 [MB] (25 MBps) [2024-11-27T12:07:28.083Z] Copying: 154/256 [MB] (24 MBps) [2024-11-27T12:07:29.018Z] Copying: 179/256 [MB] (25 MBps) [2024-11-27T12:07:29.956Z] Copying: 204/256 [MB] (24 MBps) [2024-11-27T12:07:30.890Z] Copying: 229/256 [MB] (25 MBps) [2024-11-27T12:07:30.890Z] Copying: 255/256 [MB] (25 MBps) [2024-11-27T12:07:31.460Z] Copying: 256/256 [MB] (average 25 MBps)[2024-11-27 12:07:31.193827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:41.407 [2024-11-27 12:07:31.210105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.407 [2024-11-27 12:07:31.210167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:41.407 [2024-11-27 12:07:31.210199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:41.407 [2024-11-27 12:07:31.210213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.407 [2024-11-27 12:07:31.210248] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:41.407 [2024-11-27 12:07:31.215092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.407 [2024-11-27 12:07:31.215134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:41.408 [2024-11-27 12:07:31.215150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.829 ms 00:23:41.408 [2024-11-27 12:07:31.215163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.408 [2024-11-27 12:07:31.215475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.408 [2024-11-27 12:07:31.215494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:41.408 [2024-11-27 12:07:31.215507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:23:41.408 [2024-11-27 12:07:31.215520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.408 [2024-11-27 12:07:31.218582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.408 [2024-11-27 12:07:31.218619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:41.408 [2024-11-27 12:07:31.218634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.037 ms 00:23:41.408 [2024-11-27 12:07:31.218647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:41.408 [2024-11-27 12:07:31.224141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.408 [2024-11-27 12:07:31.224195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:41.408 [2024-11-27 12:07:31.224211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.468 ms 00:23:41.408 [2024-11-27 12:07:31.224224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.408 [2024-11-27 12:07:31.259122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.408 [2024-11-27 12:07:31.259172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:41.408 [2024-11-27 12:07:31.259188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.842 ms 00:23:41.408 [2024-11-27 12:07:31.259200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.408 [2024-11-27 12:07:31.281312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.408 [2024-11-27 12:07:31.281362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:41.408 [2024-11-27 12:07:31.281387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.080 ms 00:23:41.408 [2024-11-27 12:07:31.281400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.408 [2024-11-27 12:07:31.281553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.408 [2024-11-27 12:07:31.281578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:41.408 [2024-11-27 12:07:31.281607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:23:41.408 [2024-11-27 12:07:31.281619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.408 [2024-11-27 12:07:31.316535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.408 [2024-11-27 12:07:31.316577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:41.408 [2024-11-27 12:07:31.316592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.950 ms 00:23:41.408 [2024-11-27 12:07:31.316603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.408 [2024-11-27 12:07:31.350876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.408 [2024-11-27 12:07:31.350916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:41.408 [2024-11-27 12:07:31.350931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.263 ms 00:23:41.408 [2024-11-27 12:07:31.350943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.408 [2024-11-27 12:07:31.384984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.408 [2024-11-27 12:07:31.385026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:41.408 [2024-11-27 12:07:31.385040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.034 ms 00:23:41.408 [2024-11-27 12:07:31.385052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.408 [2024-11-27 12:07:31.418348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.408 [2024-11-27 12:07:31.418396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:41.408 [2024-11-27 12:07:31.418410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.257 ms 00:23:41.408 
[2024-11-27 12:07:31.418421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.408 [2024-11-27 12:07:31.418484] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:41.408 [2024-11-27 12:07:31.418504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418789] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.418991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.419003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.419015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.419027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.419039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.419050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.419061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.419073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 
12:07:31.419084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.419096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.419107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.419119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:41.408 [2024-11-27 12:07:31.419130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:23:41.409 [2024-11-27 12:07:31.419392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:41.409 [2024-11-27 12:07:31.419733] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:41.409 [2024-11-27 12:07:31.419745] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ac0f0c90-8a1c-488f-b0e9-47cb15d830e6 00:23:41.409 [2024-11-27 12:07:31.419757] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:41.409 [2024-11-27 12:07:31.419768] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:41.409 [2024-11-27 12:07:31.419780] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:41.409 [2024-11-27 12:07:31.419791] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:41.409 [2024-11-27 12:07:31.419802] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:41.409 [2024-11-27 12:07:31.419814] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:41.409 [2024-11-27 12:07:31.419831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:41.409 [2024-11-27 12:07:31.419841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:41.409 [2024-11-27 12:07:31.419852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:41.409 [2024-11-27 12:07:31.419863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.409 [2024-11-27 12:07:31.419875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:41.409 [2024-11-27 12:07:31.419887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.384 ms 00:23:41.409 [2024-11-27 12:07:31.419898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.409 [2024-11-27 12:07:31.439845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.409 [2024-11-27 12:07:31.439883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:41.409 [2024-11-27 12:07:31.439897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.956 ms 00:23:41.409 [2024-11-27 12:07:31.439909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.409 [2024-11-27 12:07:31.440523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.409 [2024-11-27 12:07:31.440545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:41.409 [2024-11-27 12:07:31.440559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:23:41.409 [2024-11-27 12:07:31.440571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.669 [2024-11-27 12:07:31.494759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.669 [2024-11-27 12:07:31.494798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:41.670 [2024-11-27 12:07:31.494813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.670 [2024-11-27 12:07:31.494831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.670 [2024-11-27 12:07:31.494920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.670 [2024-11-27 12:07:31.494934] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:41.670 [2024-11-27 12:07:31.494947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.670 [2024-11-27 12:07:31.494958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.670 [2024-11-27 12:07:31.495019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.670 [2024-11-27 12:07:31.495034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:41.670 [2024-11-27 12:07:31.495047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.670 [2024-11-27 12:07:31.495058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.670 [2024-11-27 12:07:31.495086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.670 [2024-11-27 12:07:31.495099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:41.670 [2024-11-27 12:07:31.495110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.670 [2024-11-27 12:07:31.495122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.670 [2024-11-27 12:07:31.619502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.670 [2024-11-27 12:07:31.619560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:41.670 [2024-11-27 12:07:31.619577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.670 [2024-11-27 12:07:31.619589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.670 [2024-11-27 12:07:31.719408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.670 [2024-11-27 12:07:31.719459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:41.670 [2024-11-27 12:07:31.719476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.670 [2024-11-27 12:07:31.719488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.670 [2024-11-27 12:07:31.719601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.670 [2024-11-27 12:07:31.719616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:41.670 [2024-11-27 12:07:31.719629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.670 [2024-11-27 12:07:31.719642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.670 [2024-11-27 12:07:31.719681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.670 [2024-11-27 12:07:31.719703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:41.670 [2024-11-27 12:07:31.719716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.670 [2024-11-27 12:07:31.719727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.670 [2024-11-27 12:07:31.719849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.670 [2024-11-27 12:07:31.719865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:41.670 [2024-11-27 12:07:31.719878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.670 [2024-11-27 12:07:31.719890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.670 [2024-11-27 12:07:31.719938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:23:41.670 [2024-11-27 12:07:31.719952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:41.670 [2024-11-27 12:07:31.719970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.670 [2024-11-27 12:07:31.719982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.670 [2024-11-27 12:07:31.720038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.670 [2024-11-27 12:07:31.720052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:41.670 [2024-11-27 12:07:31.720064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.670 [2024-11-27 12:07:31.720075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.670 [2024-11-27 12:07:31.720130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.670 [2024-11-27 12:07:31.720148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:41.670 [2024-11-27 12:07:31.720161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.670 [2024-11-27 12:07:31.720172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.670 [2024-11-27 12:07:31.720379] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 511.094 ms, result 0 00:23:43.049 00:23:43.049 00:23:43.049 12:07:32 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:43.308 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:23:43.308 12:07:33 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:23:43.308 12:07:33 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:23:43.308 12:07:33 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:43.308 12:07:33 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:43.308 12:07:33 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:23:43.308 12:07:33 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:43.567 12:07:33 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78710 00:23:43.567 12:07:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78710 ']' 00:23:43.567 12:07:33 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78710 00:23:43.567 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78710) - No such process 00:23:43.567 Process with pid 78710 is not found 00:23:43.567 12:07:33 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78710 is not found' 00:23:43.567 00:23:43.567 real 1m10.200s 00:23:43.567 user 1m36.178s 00:23:43.567 sys 0m6.778s 00:23:43.567 12:07:33 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:43.567 12:07:33 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:43.567 ************************************ 00:23:43.567 END TEST ftl_trim 00:23:43.567 ************************************ 00:23:43.567 12:07:33 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:23:43.567 12:07:33 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:43.567 12:07:33 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:43.567 12:07:33 ftl -- common/autotest_common.sh@10 
-- # set +x 00:23:43.567 ************************************ 00:23:43.567 START TEST ftl_restore 00:23:43.567 ************************************ 00:23:43.567 12:07:33 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:23:43.567 * Looking for test storage... 00:23:43.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:43.826 12:07:33 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:43.826 12:07:33 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:23:43.826 12:07:33 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:43.826 12:07:33 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:43.826 12:07:33 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:43.826 12:07:33 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:43.826 12:07:33 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:43.826 12:07:33 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:23:43.826 12:07:33 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:23:43.826 12:07:33 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:43.827 12:07:33 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:23:43.827 12:07:33 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:43.827 12:07:33 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:43.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.827 --rc genhtml_branch_coverage=1 00:23:43.827 --rc genhtml_function_coverage=1 00:23:43.827 --rc genhtml_legend=1 00:23:43.827 --rc geninfo_all_blocks=1 00:23:43.827 --rc geninfo_unexecuted_blocks=1 00:23:43.827 00:23:43.827 ' 00:23:43.827 12:07:33 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:43.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.827 --rc genhtml_branch_coverage=1 00:23:43.827 --rc genhtml_function_coverage=1 00:23:43.827 --rc genhtml_legend=1 00:23:43.827 --rc geninfo_all_blocks=1 00:23:43.827 --rc geninfo_unexecuted_blocks=1 00:23:43.827 00:23:43.827 ' 00:23:43.827 12:07:33 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:43.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.827 --rc genhtml_branch_coverage=1 00:23:43.827 --rc genhtml_function_coverage=1 00:23:43.827 --rc genhtml_legend=1 00:23:43.827 --rc geninfo_all_blocks=1 00:23:43.827 --rc geninfo_unexecuted_blocks=1 00:23:43.827 00:23:43.827 ' 00:23:43.827 12:07:33 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:43.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.827 --rc genhtml_branch_coverage=1 00:23:43.827 --rc genhtml_function_coverage=1 00:23:43.827 --rc genhtml_legend=1 00:23:43.827 --rc geninfo_all_blocks=1 00:23:43.827 --rc geninfo_unexecuted_blocks=1 00:23:43.827 00:23:43.827 ' 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Ze0d0zOI6j 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:43.827 
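The trace above shows restore.sh wiring up its inputs before the target is launched: getopts consumes "-c 0000:00:10.0" as the NV-cache BDF, the remaining positional argument "0000:00:11.0" becomes the base device, the RPC timeout is pinned at 240 s, and a cleanup trap is armed before any bdevs exist so an interrupted run still tears down. A minimal sketch of that argument-handling pattern follows; restore_kill here is a hypothetical stand-in for the real cleanup function in test/ftl/restore.sh:

    #!/usr/bin/env bash
    # Sketch of the option handling seen in the xtrace above (not the real script).
    nv_cache=""
    while getopts ":u:c:f" opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;  # -c <bdf>: NV-cache device, e.g. 0000:00:10.0
            *) ;;                   # -u/-f are not exercised in this run
        esac
    done
    shift $((OPTIND - 1))   # resolves to 'shift 2' here, matching restore.sh@23
    device=$1               # positional arg: base bdev BDF, e.g. 0000:00:11.0
    timeout=240
    restore_kill() { rm -f /tmp/ftl_config.json; }  # hypothetical cleanup body
    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT

Arming the trap before the first RPC is what lets the harness print "Process with pid ... is not found" instead of leaking a target when the test exits early.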
12:07:33 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=78986 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:43.827 12:07:33 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 78986 00:23:43.827 12:07:33 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 78986 ']' 00:23:43.827 12:07:33 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.827 12:07:33 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.827 12:07:33 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.827 12:07:33 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.827 12:07:33 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:43.827 [2024-11-27 12:07:33.874831] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:23:43.827 [2024-11-27 12:07:33.874962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78986 ] 00:23:44.087 [2024-11-27 12:07:34.056711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.346 [2024-11-27 12:07:34.190869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.283 12:07:35 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.283 12:07:35 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:23:45.283 12:07:35 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:45.283 12:07:35 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:23:45.283 12:07:35 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:45.283 12:07:35 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:23:45.283 12:07:35 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:23:45.283 12:07:35 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:45.542 12:07:35 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:45.542 12:07:35 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:23:45.542 12:07:35 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:45.542 12:07:35 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:45.542 12:07:35 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:45.542 12:07:35 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:45.542 12:07:35 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:45.542 12:07:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:45.801 12:07:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:45.801 { 00:23:45.801 "name": "nvme0n1", 00:23:45.801 "aliases": [ 00:23:45.801 "1336b0d9-210c-481c-ad14-fac99e4540f1" 00:23:45.801 ], 00:23:45.801 "product_name": "NVMe disk", 00:23:45.801 "block_size": 4096, 00:23:45.801 "num_blocks": 1310720, 00:23:45.801 "uuid": 
"1336b0d9-210c-481c-ad14-fac99e4540f1", 00:23:45.801 "numa_id": -1, 00:23:45.801 "assigned_rate_limits": { 00:23:45.801 "rw_ios_per_sec": 0, 00:23:45.801 "rw_mbytes_per_sec": 0, 00:23:45.801 "r_mbytes_per_sec": 0, 00:23:45.801 "w_mbytes_per_sec": 0 00:23:45.801 }, 00:23:45.801 "claimed": true, 00:23:45.801 "claim_type": "read_many_write_one", 00:23:45.801 "zoned": false, 00:23:45.801 "supported_io_types": { 00:23:45.801 "read": true, 00:23:45.801 "write": true, 00:23:45.801 "unmap": true, 00:23:45.801 "flush": true, 00:23:45.801 "reset": true, 00:23:45.801 "nvme_admin": true, 00:23:45.801 "nvme_io": true, 00:23:45.801 "nvme_io_md": false, 00:23:45.801 "write_zeroes": true, 00:23:45.801 "zcopy": false, 00:23:45.801 "get_zone_info": false, 00:23:45.801 "zone_management": false, 00:23:45.801 "zone_append": false, 00:23:45.801 "compare": true, 00:23:45.801 "compare_and_write": false, 00:23:45.801 "abort": true, 00:23:45.801 "seek_hole": false, 00:23:45.801 "seek_data": false, 00:23:45.801 "copy": true, 00:23:45.801 "nvme_iov_md": false 00:23:45.801 }, 00:23:45.801 "driver_specific": { 00:23:45.801 "nvme": [ 00:23:45.801 { 00:23:45.801 "pci_address": "0000:00:11.0", 00:23:45.801 "trid": { 00:23:45.801 "trtype": "PCIe", 00:23:45.801 "traddr": "0000:00:11.0" 00:23:45.801 }, 00:23:45.801 "ctrlr_data": { 00:23:45.801 "cntlid": 0, 00:23:45.801 "vendor_id": "0x1b36", 00:23:45.801 "model_number": "QEMU NVMe Ctrl", 00:23:45.801 "serial_number": "12341", 00:23:45.801 "firmware_revision": "8.0.0", 00:23:45.801 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:45.801 "oacs": { 00:23:45.801 "security": 0, 00:23:45.801 "format": 1, 00:23:45.801 "firmware": 0, 00:23:45.801 "ns_manage": 1 00:23:45.801 }, 00:23:45.801 "multi_ctrlr": false, 00:23:45.801 "ana_reporting": false 00:23:45.801 }, 00:23:45.801 "vs": { 00:23:45.801 "nvme_version": "1.4" 00:23:45.801 }, 00:23:45.801 "ns_data": { 00:23:45.801 "id": 1, 00:23:45.801 "can_share": false 00:23:45.801 } 00:23:45.801 } 00:23:45.801 ], 00:23:45.801 "mp_policy": "active_passive" 00:23:45.801 } 00:23:45.801 } 00:23:45.801 ]' 00:23:45.801 12:07:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:45.801 12:07:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:45.801 12:07:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:45.801 12:07:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:45.801 12:07:35 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:45.801 12:07:35 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:23:45.801 12:07:35 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:23:45.801 12:07:35 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:45.801 12:07:35 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:23:45.801 12:07:35 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:45.801 12:07:35 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:46.060 12:07:35 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=d910be63-3b61-4c9f-bad5-993166c52464 00:23:46.060 12:07:35 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:23:46.060 12:07:35 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d910be63-3b61-4c9f-bad5-993166c52464 00:23:46.319 12:07:36 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:23:46.319 12:07:36 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=84039ef2-2895-482c-a3db-17dd62429e84 00:23:46.319 12:07:36 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 84039ef2-2895-482c-a3db-17dd62429e84 00:23:46.578 12:07:36 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=c3c3bd97-c2b1-43b5-858f-18b426efd43d 00:23:46.578 12:07:36 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:23:46.578 12:07:36 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c3c3bd97-c2b1-43b5-858f-18b426efd43d 00:23:46.578 12:07:36 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:23:46.578 12:07:36 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:46.578 12:07:36 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=c3c3bd97-c2b1-43b5-858f-18b426efd43d 00:23:46.578 12:07:36 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:23:46.578 12:07:36 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size c3c3bd97-c2b1-43b5-858f-18b426efd43d 00:23:46.578 12:07:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=c3c3bd97-c2b1-43b5-858f-18b426efd43d 00:23:46.578 12:07:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:46.578 12:07:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:46.578 12:07:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:46.578 12:07:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c3c3bd97-c2b1-43b5-858f-18b426efd43d 00:23:46.837 12:07:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:46.837 { 00:23:46.837 "name": "c3c3bd97-c2b1-43b5-858f-18b426efd43d", 00:23:46.837 "aliases": [ 00:23:46.837 "lvs/nvme0n1p0" 00:23:46.837 ], 00:23:46.837 "product_name": "Logical Volume", 00:23:46.837 "block_size": 4096, 00:23:46.837 "num_blocks": 26476544, 00:23:46.837 "uuid": "c3c3bd97-c2b1-43b5-858f-18b426efd43d", 00:23:46.837 "assigned_rate_limits": { 00:23:46.837 "rw_ios_per_sec": 0, 00:23:46.837 "rw_mbytes_per_sec": 0, 00:23:46.837 "r_mbytes_per_sec": 0, 00:23:46.837 "w_mbytes_per_sec": 0 00:23:46.837 }, 00:23:46.837 "claimed": false, 00:23:46.837 "zoned": false, 00:23:46.837 "supported_io_types": { 00:23:46.837 "read": true, 00:23:46.837 "write": true, 00:23:46.837 "unmap": true, 00:23:46.837 "flush": false, 00:23:46.837 "reset": true, 00:23:46.837 "nvme_admin": false, 00:23:46.837 "nvme_io": false, 00:23:46.837 "nvme_io_md": false, 00:23:46.837 "write_zeroes": true, 00:23:46.837 "zcopy": false, 00:23:46.837 "get_zone_info": false, 00:23:46.837 "zone_management": false, 00:23:46.837 "zone_append": false, 00:23:46.837 "compare": false, 00:23:46.837 "compare_and_write": false, 00:23:46.837 "abort": false, 00:23:46.837 "seek_hole": true, 00:23:46.837 "seek_data": true, 00:23:46.837 "copy": false, 00:23:46.837 "nvme_iov_md": false 00:23:46.837 }, 00:23:46.837 "driver_specific": { 00:23:46.837 "lvol": { 00:23:46.837 "lvol_store_uuid": "84039ef2-2895-482c-a3db-17dd62429e84", 00:23:46.837 "base_bdev": "nvme0n1", 00:23:46.837 "thin_provision": true, 00:23:46.837 "num_allocated_clusters": 0, 00:23:46.837 "snapshot": false, 00:23:46.837 "clone": false, 00:23:46.837 "esnap_clone": false 00:23:46.837 } 00:23:46.837 } 00:23:46.837 } 00:23:46.837 ]' 00:23:46.838 12:07:36 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:46.838 12:07:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:46.838 12:07:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:46.838 12:07:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:46.838 12:07:36 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:46.838 12:07:36 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:46.838 12:07:36 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:23:46.838 12:07:36 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:23:46.838 12:07:36 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:47.097 12:07:37 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:47.097 12:07:37 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:47.097 12:07:37 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size c3c3bd97-c2b1-43b5-858f-18b426efd43d 00:23:47.097 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=c3c3bd97-c2b1-43b5-858f-18b426efd43d 00:23:47.097 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:47.097 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:47.097 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:47.097 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c3c3bd97-c2b1-43b5-858f-18b426efd43d 00:23:47.355 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:47.355 { 00:23:47.355 "name": "c3c3bd97-c2b1-43b5-858f-18b426efd43d", 00:23:47.355 "aliases": [ 00:23:47.355 "lvs/nvme0n1p0" 00:23:47.355 ], 00:23:47.355 "product_name": "Logical Volume", 00:23:47.355 "block_size": 4096, 00:23:47.355 "num_blocks": 26476544, 00:23:47.355 "uuid": "c3c3bd97-c2b1-43b5-858f-18b426efd43d", 00:23:47.355 "assigned_rate_limits": { 00:23:47.355 "rw_ios_per_sec": 0, 00:23:47.355 "rw_mbytes_per_sec": 0, 00:23:47.355 "r_mbytes_per_sec": 0, 00:23:47.355 "w_mbytes_per_sec": 0 00:23:47.355 }, 00:23:47.355 "claimed": false, 00:23:47.355 "zoned": false, 00:23:47.355 "supported_io_types": { 00:23:47.355 "read": true, 00:23:47.356 "write": true, 00:23:47.356 "unmap": true, 00:23:47.356 "flush": false, 00:23:47.356 "reset": true, 00:23:47.356 "nvme_admin": false, 00:23:47.356 "nvme_io": false, 00:23:47.356 "nvme_io_md": false, 00:23:47.356 "write_zeroes": true, 00:23:47.356 "zcopy": false, 00:23:47.356 "get_zone_info": false, 00:23:47.356 "zone_management": false, 00:23:47.356 "zone_append": false, 00:23:47.356 "compare": false, 00:23:47.356 "compare_and_write": false, 00:23:47.356 "abort": false, 00:23:47.356 "seek_hole": true, 00:23:47.356 "seek_data": true, 00:23:47.356 "copy": false, 00:23:47.356 "nvme_iov_md": false 00:23:47.356 }, 00:23:47.356 "driver_specific": { 00:23:47.356 "lvol": { 00:23:47.356 "lvol_store_uuid": "84039ef2-2895-482c-a3db-17dd62429e84", 00:23:47.356 "base_bdev": "nvme0n1", 00:23:47.356 "thin_provision": true, 00:23:47.356 "num_allocated_clusters": 0, 00:23:47.356 "snapshot": false, 00:23:47.356 "clone": false, 00:23:47.356 "esnap_clone": false 00:23:47.356 } 00:23:47.356 } 00:23:47.356 } 00:23:47.356 ]' 00:23:47.356 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:23:47.356 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:47.356 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:47.614 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:47.614 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:47.614 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:47.614 12:07:37 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:23:47.614 12:07:37 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:47.614 12:07:37 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:23:47.614 12:07:37 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size c3c3bd97-c2b1-43b5-858f-18b426efd43d 00:23:47.614 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=c3c3bd97-c2b1-43b5-858f-18b426efd43d 00:23:47.614 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:47.614 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:47.614 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:47.615 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c3c3bd97-c2b1-43b5-858f-18b426efd43d 00:23:47.874 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:47.874 { 00:23:47.874 "name": "c3c3bd97-c2b1-43b5-858f-18b426efd43d", 00:23:47.874 "aliases": [ 00:23:47.874 "lvs/nvme0n1p0" 00:23:47.874 ], 00:23:47.874 "product_name": "Logical Volume", 00:23:47.874 "block_size": 4096, 00:23:47.874 "num_blocks": 26476544, 00:23:47.874 "uuid": "c3c3bd97-c2b1-43b5-858f-18b426efd43d", 00:23:47.874 "assigned_rate_limits": { 00:23:47.874 "rw_ios_per_sec": 0, 00:23:47.874 "rw_mbytes_per_sec": 0, 00:23:47.874 "r_mbytes_per_sec": 0, 00:23:47.874 "w_mbytes_per_sec": 0 00:23:47.874 }, 00:23:47.874 "claimed": false, 00:23:47.874 "zoned": false, 00:23:47.874 "supported_io_types": { 00:23:47.874 "read": true, 00:23:47.874 "write": true, 00:23:47.874 "unmap": true, 00:23:47.874 "flush": false, 00:23:47.874 "reset": true, 00:23:47.874 "nvme_admin": false, 00:23:47.874 "nvme_io": false, 00:23:47.874 "nvme_io_md": false, 00:23:47.874 "write_zeroes": true, 00:23:47.874 "zcopy": false, 00:23:47.874 "get_zone_info": false, 00:23:47.874 "zone_management": false, 00:23:47.874 "zone_append": false, 00:23:47.874 "compare": false, 00:23:47.874 "compare_and_write": false, 00:23:47.874 "abort": false, 00:23:47.874 "seek_hole": true, 00:23:47.874 "seek_data": true, 00:23:47.874 "copy": false, 00:23:47.874 "nvme_iov_md": false 00:23:47.874 }, 00:23:47.874 "driver_specific": { 00:23:47.874 "lvol": { 00:23:47.874 "lvol_store_uuid": "84039ef2-2895-482c-a3db-17dd62429e84", 00:23:47.874 "base_bdev": "nvme0n1", 00:23:47.874 "thin_provision": true, 00:23:47.874 "num_allocated_clusters": 0, 00:23:47.874 "snapshot": false, 00:23:47.874 "clone": false, 00:23:47.874 "esnap_clone": false 00:23:47.874 } 00:23:47.874 } 00:23:47.874 } 00:23:47.874 ]' 00:23:47.874 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:47.874 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:47.874 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:47.874 12:07:37 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:23:47.874 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:47.874 12:07:37 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:47.874 12:07:37 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:23:47.874 12:07:37 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c3c3bd97-c2b1-43b5-858f-18b426efd43d --l2p_dram_limit 10' 00:23:47.874 12:07:37 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:23:47.874 12:07:37 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:47.874 12:07:37 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:47.874 12:07:37 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:23:47.874 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:23:47.874 12:07:37 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c3c3bd97-c2b1-43b5-858f-18b426efd43d --l2p_dram_limit 10 -c nvc0n1p0 00:23:48.134 [2024-11-27 12:07:38.099306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.134 [2024-11-27 12:07:38.099370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:48.134 [2024-11-27 12:07:38.099394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:48.134 [2024-11-27 12:07:38.099408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.134 [2024-11-27 12:07:38.099478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.134 [2024-11-27 12:07:38.099493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:48.134 [2024-11-27 12:07:38.099510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:48.134 [2024-11-27 12:07:38.099522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.134 [2024-11-27 12:07:38.099550] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:48.134 [2024-11-27 12:07:38.100469] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:48.134 [2024-11-27 12:07:38.100510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.134 [2024-11-27 12:07:38.100525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:48.134 [2024-11-27 12:07:38.100542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:23:48.134 [2024-11-27 12:07:38.100555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.134 [2024-11-27 12:07:38.100638] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 46f1d584-88ce-4301-a84d-84f52c9539f7 00:23:48.134 [2024-11-27 12:07:38.103087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.134 [2024-11-27 12:07:38.103124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:48.134 [2024-11-27 12:07:38.103138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:48.134 [2024-11-27 12:07:38.103156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.134 [2024-11-27 12:07:38.117471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.134 [2024-11-27 
12:07:38.117511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:48.134 [2024-11-27 12:07:38.117527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.271 ms 00:23:48.134 [2024-11-27 12:07:38.117543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.134 [2024-11-27 12:07:38.117661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.134 [2024-11-27 12:07:38.117681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:48.134 [2024-11-27 12:07:38.117694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:23:48.134 [2024-11-27 12:07:38.117722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.134 [2024-11-27 12:07:38.117810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.134 [2024-11-27 12:07:38.117834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:48.134 [2024-11-27 12:07:38.117847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:48.134 [2024-11-27 12:07:38.117863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.134 [2024-11-27 12:07:38.117891] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:48.134 [2024-11-27 12:07:38.124160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.134 [2024-11-27 12:07:38.124198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:48.134 [2024-11-27 12:07:38.124229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.282 ms 00:23:48.134 [2024-11-27 12:07:38.124241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.134 [2024-11-27 12:07:38.124289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.134 [2024-11-27 12:07:38.124302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:48.134 [2024-11-27 12:07:38.124320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:48.134 [2024-11-27 12:07:38.124332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.134 [2024-11-27 12:07:38.124388] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:48.134 [2024-11-27 12:07:38.124526] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:48.135 [2024-11-27 12:07:38.124552] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:48.135 [2024-11-27 12:07:38.124569] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:48.135 [2024-11-27 12:07:38.124587] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:48.135 [2024-11-27 12:07:38.124601] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:48.135 [2024-11-27 12:07:38.124623] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:48.135 [2024-11-27 12:07:38.124636] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:48.135 [2024-11-27 12:07:38.124651] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:48.135 [2024-11-27 12:07:38.124663] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:48.135 [2024-11-27 12:07:38.124680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.135 [2024-11-27 12:07:38.124704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:48.135 [2024-11-27 12:07:38.124722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:23:48.135 [2024-11-27 12:07:38.124734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.135 [2024-11-27 12:07:38.124811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.135 [2024-11-27 12:07:38.124824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:48.135 [2024-11-27 12:07:38.124839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:48.135 [2024-11-27 12:07:38.124854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.135 [2024-11-27 12:07:38.124953] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:48.135 [2024-11-27 12:07:38.124967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:48.135 [2024-11-27 12:07:38.124983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:48.135 [2024-11-27 12:07:38.124995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:48.135 [2024-11-27 12:07:38.125011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:48.135 [2024-11-27 12:07:38.125022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:48.135 [2024-11-27 12:07:38.125036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:48.135 [2024-11-27 12:07:38.125048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:48.135 [2024-11-27 12:07:38.125062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:48.135 [2024-11-27 12:07:38.125075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:48.135 [2024-11-27 12:07:38.125089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:48.135 [2024-11-27 12:07:38.125100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:48.135 [2024-11-27 12:07:38.125115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:48.135 [2024-11-27 12:07:38.125126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:48.135 [2024-11-27 12:07:38.125141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:48.135 [2024-11-27 12:07:38.125152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:48.135 [2024-11-27 12:07:38.125171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:48.135 [2024-11-27 12:07:38.125182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:48.135 [2024-11-27 12:07:38.125196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:48.135 [2024-11-27 12:07:38.125207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:48.135 [2024-11-27 12:07:38.125221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:48.135 [2024-11-27 12:07:38.125232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:48.135 [2024-11-27 12:07:38.125246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:48.135 
[2024-11-27 12:07:38.125257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:48.135 [2024-11-27 12:07:38.125270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:48.135 [2024-11-27 12:07:38.125281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:48.135 [2024-11-27 12:07:38.125294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:48.135 [2024-11-27 12:07:38.125305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:48.135 [2024-11-27 12:07:38.125319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:48.135 [2024-11-27 12:07:38.125329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:48.135 [2024-11-27 12:07:38.125343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:48.135 [2024-11-27 12:07:38.125366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:48.135 [2024-11-27 12:07:38.125385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:48.135 [2024-11-27 12:07:38.125396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:48.135 [2024-11-27 12:07:38.125410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:48.135 [2024-11-27 12:07:38.125421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:48.135 [2024-11-27 12:07:38.125436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:48.135 [2024-11-27 12:07:38.125447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:48.135 [2024-11-27 12:07:38.125462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:48.135 [2024-11-27 12:07:38.125473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:48.135 [2024-11-27 12:07:38.125487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:48.135 [2024-11-27 12:07:38.125499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:48.135 [2024-11-27 12:07:38.125513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:48.135 [2024-11-27 12:07:38.125523] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:48.135 [2024-11-27 12:07:38.125540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:48.135 [2024-11-27 12:07:38.125552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:48.135 [2024-11-27 12:07:38.125567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:48.135 [2024-11-27 12:07:38.125583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:48.135 [2024-11-27 12:07:38.125601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:48.135 [2024-11-27 12:07:38.125612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:48.135 [2024-11-27 12:07:38.125626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:48.135 [2024-11-27 12:07:38.125636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:48.135 [2024-11-27 12:07:38.125651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:48.135 [2024-11-27 12:07:38.125668] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:48.135 [2024-11-27 
12:07:38.125687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:48.135 [2024-11-27 12:07:38.125700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:48.135 [2024-11-27 12:07:38.125725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:48.135 [2024-11-27 12:07:38.125737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:48.135 [2024-11-27 12:07:38.125753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:48.135 [2024-11-27 12:07:38.125765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:48.135 [2024-11-27 12:07:38.125782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:48.135 [2024-11-27 12:07:38.125794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:48.135 [2024-11-27 12:07:38.125809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:48.135 [2024-11-27 12:07:38.125822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:48.135 [2024-11-27 12:07:38.125840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:48.135 [2024-11-27 12:07:38.125852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:48.135 [2024-11-27 12:07:38.125869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:48.135 [2024-11-27 12:07:38.125880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:48.135 [2024-11-27 12:07:38.125896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:48.135 [2024-11-27 12:07:38.125908] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:48.135 [2024-11-27 12:07:38.125926] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:48.135 [2024-11-27 12:07:38.125939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:48.135 [2024-11-27 12:07:38.125955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:48.135 [2024-11-27 12:07:38.125970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:48.135 [2024-11-27 12:07:38.125986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:48.135 [2024-11-27 12:07:38.125999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.135 [2024-11-27 12:07:38.126016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:48.135 [2024-11-27 12:07:38.126028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.105 ms 00:23:48.135 [2024-11-27 12:07:38.126044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.135 [2024-11-27 12:07:38.126093] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:48.135 [2024-11-27 12:07:38.126115] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:52.330 [2024-11-27 12:07:41.575297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.330 [2024-11-27 12:07:41.575369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:52.330 [2024-11-27 12:07:41.575390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3454.801 ms 00:23:52.330 [2024-11-27 12:07:41.575407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.330 [2024-11-27 12:07:41.620053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.330 [2024-11-27 12:07:41.620114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:52.330 [2024-11-27 12:07:41.620133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.363 ms 00:23:52.330 [2024-11-27 12:07:41.620149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.330 [2024-11-27 12:07:41.620311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.330 [2024-11-27 12:07:41.620332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:52.330 [2024-11-27 12:07:41.620351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:52.330 [2024-11-27 12:07:41.620382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.330 [2024-11-27 12:07:41.673826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.330 [2024-11-27 12:07:41.673878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:52.330 [2024-11-27 12:07:41.673895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.463 ms 00:23:52.330 [2024-11-27 12:07:41.673911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.330 [2024-11-27 12:07:41.673960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.330 [2024-11-27 12:07:41.673977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:52.330 [2024-11-27 12:07:41.673991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:52.330 [2024-11-27 12:07:41.674022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.330 [2024-11-27 12:07:41.674882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.330 [2024-11-27 12:07:41.674912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:52.330 [2024-11-27 12:07:41.674927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:23:52.330 [2024-11-27 12:07:41.674942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.330 
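Every management step in this startup sequence is emitted by mngt/ftl_mngt.c as a fixed record: an Action (or Rollback) marker, the step name, its duration, and a status. That makes per-step timings easy to pull out of a captured run; a small sketch, under the assumption that the output above has been saved to a file (ftl.log is a placeholder name):

    # Pair each trace_step "name:" line with the "duration:" line that follows it.
    awk '/trace_step/ && /name:/     { sub(/.*name: /, "");     step = $0 }
         /trace_step/ && /duration:/ { sub(/.*duration: /, ""); printf "%-40s %s\n", step, $0 }' ftl.log

In this run the dominant step is the NV-cache scrub at roughly 3.45 s, consistent with the earlier "NV cache data region needs scrubbing" notice; everything else completes in tens of milliseconds or less.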
[2024-11-27 12:07:41.675055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.330 [2024-11-27 12:07:41.675072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:52.330 [2024-11-27 12:07:41.675084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:23:52.331 [2024-11-27 12:07:41.675102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.331 [2024-11-27 12:07:41.700411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.331 [2024-11-27 12:07:41.700459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:52.331 [2024-11-27 12:07:41.700476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.325 ms 00:23:52.331 [2024-11-27 12:07:41.700492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.331 [2024-11-27 12:07:41.727111] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:52.331 [2024-11-27 12:07:41.732373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.331 [2024-11-27 12:07:41.732411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:52.331 [2024-11-27 12:07:41.732429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.811 ms 00:23:52.331 [2024-11-27 12:07:41.732442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.331 [2024-11-27 12:07:41.821525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.331 [2024-11-27 12:07:41.821572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:52.331 [2024-11-27 12:07:41.821593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.184 ms 00:23:52.331 [2024-11-27 12:07:41.821606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.331 [2024-11-27 12:07:41.821812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.331 [2024-11-27 12:07:41.821828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:52.331 [2024-11-27 12:07:41.821848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:23:52.331 [2024-11-27 12:07:41.821862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.331 [2024-11-27 12:07:41.856077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.331 [2024-11-27 12:07:41.856120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:52.331 [2024-11-27 12:07:41.856141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.207 ms 00:23:52.331 [2024-11-27 12:07:41.856157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.331 [2024-11-27 12:07:41.890003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.331 [2024-11-27 12:07:41.890045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:52.331 [2024-11-27 12:07:41.890065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.849 ms 00:23:52.331 [2024-11-27 12:07:41.890077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.331 [2024-11-27 12:07:41.890788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.331 [2024-11-27 12:07:41.890816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:52.331 
[2024-11-27 12:07:41.890839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:23:52.331 [2024-11-27 12:07:41.890851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.331 [2024-11-27 12:07:41.989534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.331 [2024-11-27 12:07:41.989579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:52.331 [2024-11-27 12:07:41.989604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.775 ms 00:23:52.331 [2024-11-27 12:07:41.989619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.331 [2024-11-27 12:07:42.026451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.331 [2024-11-27 12:07:42.026493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:52.331 [2024-11-27 12:07:42.026513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.790 ms 00:23:52.331 [2024-11-27 12:07:42.026526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.331 [2024-11-27 12:07:42.061086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.331 [2024-11-27 12:07:42.061127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:52.331 [2024-11-27 12:07:42.061146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.566 ms 00:23:52.331 [2024-11-27 12:07:42.061158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.331 [2024-11-27 12:07:42.096417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.331 [2024-11-27 12:07:42.096458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:52.331 [2024-11-27 12:07:42.096478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.265 ms 00:23:52.331 [2024-11-27 12:07:42.096490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.331 [2024-11-27 12:07:42.096545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.331 [2024-11-27 12:07:42.096559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:52.331 [2024-11-27 12:07:42.096579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:52.331 [2024-11-27 12:07:42.096591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.331 [2024-11-27 12:07:42.096710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.331 [2024-11-27 12:07:42.096725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:52.331 [2024-11-27 12:07:42.096742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:52.331 [2024-11-27 12:07:42.096755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.331 [2024-11-27 12:07:42.098252] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4004.898 ms, result 0 00:23:52.331 { 00:23:52.331 "name": "ftl0", 00:23:52.331 "uuid": "46f1d584-88ce-4301-a84d-84f52c9539f7" 00:23:52.331 } 00:23:52.331 12:07:42 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:23:52.331 12:07:42 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:52.331 12:07:42 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:23:52.331 12:07:42 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:52.591 [2024-11-27 12:07:42.504621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.591 [2024-11-27 12:07:42.504679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:52.591 [2024-11-27 12:07:42.504694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:52.591 [2024-11-27 12:07:42.504710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.591 [2024-11-27 12:07:42.504738] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:52.591 [2024-11-27 12:07:42.509280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.591 [2024-11-27 12:07:42.509316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:52.591 [2024-11-27 12:07:42.509334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.523 ms 00:23:52.591 [2024-11-27 12:07:42.509346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.591 [2024-11-27 12:07:42.509624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.591 [2024-11-27 12:07:42.509641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:52.591 [2024-11-27 12:07:42.509657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:23:52.591 [2024-11-27 12:07:42.509670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.591 [2024-11-27 12:07:42.512018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.591 [2024-11-27 12:07:42.512043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:52.591 [2024-11-27 12:07:42.512060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.328 ms 00:23:52.591 [2024-11-27 12:07:42.512072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.591 [2024-11-27 12:07:42.516758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.591 [2024-11-27 12:07:42.516801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:52.591 [2024-11-27 12:07:42.516818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.664 ms 00:23:52.591 [2024-11-27 12:07:42.516830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.591 [2024-11-27 12:07:42.550941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.591 [2024-11-27 12:07:42.550981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:52.591 [2024-11-27 12:07:42.551000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.104 ms 00:23:52.591 [2024-11-27 12:07:42.551012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.591 [2024-11-27 12:07:42.572633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.591 [2024-11-27 12:07:42.572676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:52.591 [2024-11-27 12:07:42.572696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.601 ms 00:23:52.591 [2024-11-27 12:07:42.572708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.591 [2024-11-27 12:07:42.572861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.591 [2024-11-27 12:07:42.572878] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:52.591 [2024-11-27 12:07:42.572894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:23:52.591 [2024-11-27 12:07:42.572910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.591 [2024-11-27 12:07:42.607508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.591 [2024-11-27 12:07:42.607548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:52.591 [2024-11-27 12:07:42.607568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.627 ms 00:23:52.591 [2024-11-27 12:07:42.607579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.591 [2024-11-27 12:07:42.641564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.591 [2024-11-27 12:07:42.641604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:52.591 [2024-11-27 12:07:42.641623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.988 ms 00:23:52.591 [2024-11-27 12:07:42.641634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.852 [2024-11-27 12:07:42.675320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.852 [2024-11-27 12:07:42.675367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:52.852 [2024-11-27 12:07:42.675386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.686 ms 00:23:52.852 [2024-11-27 12:07:42.675407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.852 [2024-11-27 12:07:42.709318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.852 [2024-11-27 12:07:42.709371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:52.852 [2024-11-27 12:07:42.709390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.858 ms 00:23:52.852 [2024-11-27 12:07:42.709402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.852 [2024-11-27 12:07:42.709450] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:52.852 [2024-11-27 12:07:42.709473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709608] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 
[2024-11-27 12:07:42.709972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.709989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:23:52.852 [2024-11-27 12:07:42.710343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:52.852 [2024-11-27 12:07:42.710480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:52.853 [2024-11-27 12:07:42.710942] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:52.853 [2024-11-27 12:07:42.710957] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 46f1d584-88ce-4301-a84d-84f52c9539f7 00:23:52.853 [2024-11-27 12:07:42.710970] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:52.853 [2024-11-27 12:07:42.710992] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:52.853 [2024-11-27 12:07:42.711004] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:52.853 [2024-11-27 12:07:42.711019] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:52.853 [2024-11-27 12:07:42.711030] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:52.853 [2024-11-27 12:07:42.711045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:52.853 [2024-11-27 12:07:42.711056] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:52.853 [2024-11-27 12:07:42.711070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:52.853 [2024-11-27 12:07:42.711081] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:23:52.853 [2024-11-27 12:07:42.711096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.853 [2024-11-27 12:07:42.711108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:52.853 [2024-11-27 12:07:42.711128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.651 ms 00:23:52.853 [2024-11-27 12:07:42.711144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.853 [2024-11-27 12:07:42.730503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.853 [2024-11-27 12:07:42.730541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:52.853 [2024-11-27 12:07:42.730561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.328 ms 00:23:52.853 [2024-11-27 12:07:42.730574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.853 [2024-11-27 12:07:42.731148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.853 [2024-11-27 12:07:42.731184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:52.853 [2024-11-27 12:07:42.731200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:23:52.853 [2024-11-27 12:07:42.731211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.853 [2024-11-27 12:07:42.798342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.853 [2024-11-27 12:07:42.798386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:52.853 [2024-11-27 12:07:42.798406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.853 [2024-11-27 12:07:42.798419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.853 [2024-11-27 12:07:42.798484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.853 [2024-11-27 12:07:42.798500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:52.853 [2024-11-27 12:07:42.798517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.853 [2024-11-27 12:07:42.798530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.853 [2024-11-27 12:07:42.798639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.853 [2024-11-27 12:07:42.798655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:52.853 [2024-11-27 12:07:42.798671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.853 [2024-11-27 12:07:42.798682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.853 [2024-11-27 12:07:42.798713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.853 [2024-11-27 12:07:42.798725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:52.853 [2024-11-27 12:07:42.798745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.853 [2024-11-27 12:07:42.798756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.113 [2024-11-27 12:07:42.925976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.113 [2024-11-27 12:07:42.926036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:53.113 [2024-11-27 12:07:42.926058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:23:53.113 [2024-11-27 12:07:42.926071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.114 [2024-11-27 12:07:43.027312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.114 [2024-11-27 12:07:43.027388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:53.114 [2024-11-27 12:07:43.027411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.114 [2024-11-27 12:07:43.027424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.114 [2024-11-27 12:07:43.027588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.114 [2024-11-27 12:07:43.027604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:53.114 [2024-11-27 12:07:43.027622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.114 [2024-11-27 12:07:43.027634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.114 [2024-11-27 12:07:43.027714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.114 [2024-11-27 12:07:43.027730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:53.114 [2024-11-27 12:07:43.027746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.114 [2024-11-27 12:07:43.027763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.114 [2024-11-27 12:07:43.027905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.114 [2024-11-27 12:07:43.027923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:53.114 [2024-11-27 12:07:43.027939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.114 [2024-11-27 12:07:43.027952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.114 [2024-11-27 12:07:43.028015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.114 [2024-11-27 12:07:43.028030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:53.114 [2024-11-27 12:07:43.028047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.114 [2024-11-27 12:07:43.028060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.114 [2024-11-27 12:07:43.028123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.114 [2024-11-27 12:07:43.028136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:53.114 [2024-11-27 12:07:43.028153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.114 [2024-11-27 12:07:43.028165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.114 [2024-11-27 12:07:43.028229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.114 [2024-11-27 12:07:43.028243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:53.114 [2024-11-27 12:07:43.028259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.114 [2024-11-27 12:07:43.028275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.114 [2024-11-27 12:07:43.028476] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 524.641 ms, result 0 00:23:53.114 true 00:23:53.114 12:07:43 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 78986 
00:23:53.114 12:07:43 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 78986 ']' 00:23:53.114 12:07:43 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 78986 00:23:53.114 12:07:43 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:23:53.114 12:07:43 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.114 12:07:43 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78986 00:23:53.114 12:07:43 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:53.114 12:07:43 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:53.114 killing process with pid 78986 00:23:53.114 12:07:43 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78986' 00:23:53.114 12:07:43 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 78986 00:23:53.114 12:07:43 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 78986 00:23:58.391 12:07:48 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:24:02.587 262144+0 records in 00:24:02.587 262144+0 records out 00:24:02.587 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.94067 s, 272 MB/s 00:24:02.587 12:07:51 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:03.967 12:07:53 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:03.967 [2024-11-27 12:07:53.705191] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:24:03.967 [2024-11-27 12:07:53.705326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79222 ] 00:24:03.967 [2024-11-27 12:07:53.886709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.967 [2024-11-27 12:07:54.016555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.536 [2024-11-27 12:07:54.442154] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:04.536 [2024-11-27 12:07:54.442225] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:04.796 [2024-11-27 12:07:54.608502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.796 [2024-11-27 12:07:54.608556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:04.796 [2024-11-27 12:07:54.608571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:04.796 [2024-11-27 12:07:54.608581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.796 [2024-11-27 12:07:54.608636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.796 [2024-11-27 12:07:54.608655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:04.796 [2024-11-27 12:07:54.608666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:04.796 [2024-11-27 12:07:54.608676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.796 [2024-11-27 12:07:54.608698] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:24:04.796 [2024-11-27 12:07:54.609571] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:04.796 [2024-11-27 12:07:54.609600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.796 [2024-11-27 12:07:54.609612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:04.796 [2024-11-27 12:07:54.609623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:24:04.796 [2024-11-27 12:07:54.609633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.796 [2024-11-27 12:07:54.612103] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:04.796 [2024-11-27 12:07:54.631861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.797 [2024-11-27 12:07:54.631897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:04.797 [2024-11-27 12:07:54.631911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.790 ms 00:24:04.797 [2024-11-27 12:07:54.631922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.797 [2024-11-27 12:07:54.631999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.797 [2024-11-27 12:07:54.632012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:04.797 [2024-11-27 12:07:54.632023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:04.797 [2024-11-27 12:07:54.632033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.797 [2024-11-27 12:07:54.644417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.797 [2024-11-27 12:07:54.644443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:04.797 [2024-11-27 12:07:54.644457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.327 ms 00:24:04.797 [2024-11-27 12:07:54.644475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.797 [2024-11-27 12:07:54.644578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.797 [2024-11-27 12:07:54.644593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:04.797 [2024-11-27 12:07:54.644603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:24:04.797 [2024-11-27 12:07:54.644613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.797 [2024-11-27 12:07:54.644667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.797 [2024-11-27 12:07:54.644679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:04.797 [2024-11-27 12:07:54.644690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:04.797 [2024-11-27 12:07:54.644699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.797 [2024-11-27 12:07:54.644729] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:04.797 [2024-11-27 12:07:54.650175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.797 [2024-11-27 12:07:54.650205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:04.797 [2024-11-27 12:07:54.650222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.462 ms 00:24:04.797 [2024-11-27 12:07:54.650232] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.797 [2024-11-27 12:07:54.650264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.797 [2024-11-27 12:07:54.650275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:04.797 [2024-11-27 12:07:54.650286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:04.797 [2024-11-27 12:07:54.650295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.797 [2024-11-27 12:07:54.650331] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:04.797 [2024-11-27 12:07:54.650372] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:04.797 [2024-11-27 12:07:54.650411] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:04.797 [2024-11-27 12:07:54.650433] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:04.797 [2024-11-27 12:07:54.650522] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:04.797 [2024-11-27 12:07:54.650536] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:04.797 [2024-11-27 12:07:54.650549] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:04.797 [2024-11-27 12:07:54.650561] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:04.797 [2024-11-27 12:07:54.650572] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:04.797 [2024-11-27 12:07:54.650583] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:04.797 [2024-11-27 12:07:54.650592] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:04.797 [2024-11-27 12:07:54.650606] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:04.797 [2024-11-27 12:07:54.650615] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:04.797 [2024-11-27 12:07:54.650626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.797 [2024-11-27 12:07:54.650635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:04.797 [2024-11-27 12:07:54.650645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:24:04.797 [2024-11-27 12:07:54.650655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.797 [2024-11-27 12:07:54.650723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.797 [2024-11-27 12:07:54.650734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:04.797 [2024-11-27 12:07:54.650744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:04.797 [2024-11-27 12:07:54.650754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.797 [2024-11-27 12:07:54.650849] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:04.797 [2024-11-27 12:07:54.650864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:04.797 [2024-11-27 12:07:54.650877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:24:04.797 [2024-11-27 12:07:54.650887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.797 [2024-11-27 12:07:54.650897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:04.797 [2024-11-27 12:07:54.650907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:04.797 [2024-11-27 12:07:54.650917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:04.797 [2024-11-27 12:07:54.650926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:04.797 [2024-11-27 12:07:54.650936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:04.797 [2024-11-27 12:07:54.650944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:04.797 [2024-11-27 12:07:54.650954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:04.797 [2024-11-27 12:07:54.650962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:04.797 [2024-11-27 12:07:54.650971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:04.797 [2024-11-27 12:07:54.650990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:04.797 [2024-11-27 12:07:54.650999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:04.797 [2024-11-27 12:07:54.651008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.797 [2024-11-27 12:07:54.651018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:04.797 [2024-11-27 12:07:54.651026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:04.797 [2024-11-27 12:07:54.651035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.797 [2024-11-27 12:07:54.651043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:04.797 [2024-11-27 12:07:54.651052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:04.797 [2024-11-27 12:07:54.651061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:04.797 [2024-11-27 12:07:54.651070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:04.797 [2024-11-27 12:07:54.651079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:04.797 [2024-11-27 12:07:54.651087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:04.797 [2024-11-27 12:07:54.651096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:04.797 [2024-11-27 12:07:54.651104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:04.797 [2024-11-27 12:07:54.651113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:04.797 [2024-11-27 12:07:54.651122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:04.797 [2024-11-27 12:07:54.651130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:04.797 [2024-11-27 12:07:54.651138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:04.797 [2024-11-27 12:07:54.651146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:04.797 [2024-11-27 12:07:54.651155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:04.797 [2024-11-27 12:07:54.651163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:04.797 [2024-11-27 12:07:54.651173] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:24:04.797 [2024-11-27 12:07:54.651181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:04.797 [2024-11-27 12:07:54.651190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:04.797 [2024-11-27 12:07:54.651200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:04.797 [2024-11-27 12:07:54.651209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:04.797 [2024-11-27 12:07:54.651217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.797 [2024-11-27 12:07:54.651225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:04.797 [2024-11-27 12:07:54.651234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:04.797 [2024-11-27 12:07:54.651243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.797 [2024-11-27 12:07:54.651251] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:04.797 [2024-11-27 12:07:54.651261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:04.797 [2024-11-27 12:07:54.651270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:04.797 [2024-11-27 12:07:54.651280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.797 [2024-11-27 12:07:54.651289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:04.797 [2024-11-27 12:07:54.651299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:04.797 [2024-11-27 12:07:54.651307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:04.797 [2024-11-27 12:07:54.651316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:04.797 [2024-11-27 12:07:54.651324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:04.797 [2024-11-27 12:07:54.651332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:04.797 [2024-11-27 12:07:54.651342] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:04.797 [2024-11-27 12:07:54.651354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:04.798 [2024-11-27 12:07:54.651382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:04.798 [2024-11-27 12:07:54.651392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:04.798 [2024-11-27 12:07:54.651401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:04.798 [2024-11-27 12:07:54.651412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:04.798 [2024-11-27 12:07:54.651425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:04.798 [2024-11-27 12:07:54.651436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:04.798 [2024-11-27 12:07:54.651446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:04.798 [2024-11-27 12:07:54.651456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:04.798 [2024-11-27 12:07:54.651466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:04.798 [2024-11-27 12:07:54.651475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:04.798 [2024-11-27 12:07:54.651485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:04.798 [2024-11-27 12:07:54.651495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:04.798 [2024-11-27 12:07:54.651505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:04.798 [2024-11-27 12:07:54.651516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:04.798 [2024-11-27 12:07:54.651526] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:04.798 [2024-11-27 12:07:54.651537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:04.798 [2024-11-27 12:07:54.651548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:04.798 [2024-11-27 12:07:54.651557] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:04.798 [2024-11-27 12:07:54.651566] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:04.798 [2024-11-27 12:07:54.651576] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:04.798 [2024-11-27 12:07:54.651586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.798 [2024-11-27 12:07:54.651596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:04.798 [2024-11-27 12:07:54.651606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:24:04.798 [2024-11-27 12:07:54.651615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.798 [2024-11-27 12:07:54.700148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.798 [2024-11-27 12:07:54.700181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:04.798 [2024-11-27 12:07:54.700194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.563 ms 00:24:04.798 [2024-11-27 12:07:54.700210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.798 [2024-11-27 12:07:54.700286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.798 [2024-11-27 12:07:54.700297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:04.798 [2024-11-27 12:07:54.700308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.051 ms 00:24:04.798 [2024-11-27 12:07:54.700318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.798 [2024-11-27 12:07:54.778578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.798 [2024-11-27 12:07:54.778615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:04.798 [2024-11-27 12:07:54.778629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.295 ms 00:24:04.798 [2024-11-27 12:07:54.778640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.798 [2024-11-27 12:07:54.778685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.798 [2024-11-27 12:07:54.778702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:04.798 [2024-11-27 12:07:54.778714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:04.798 [2024-11-27 12:07:54.778724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.798 [2024-11-27 12:07:54.779555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.798 [2024-11-27 12:07:54.779577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:04.798 [2024-11-27 12:07:54.779590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:24:04.798 [2024-11-27 12:07:54.779600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.798 [2024-11-27 12:07:54.779733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.798 [2024-11-27 12:07:54.779747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:04.798 [2024-11-27 12:07:54.779772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:24:04.798 [2024-11-27 12:07:54.779782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.798 [2024-11-27 12:07:54.802628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.798 [2024-11-27 12:07:54.802664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:04.798 [2024-11-27 12:07:54.802677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.861 ms 00:24:04.798 [2024-11-27 12:07:54.802688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.798 [2024-11-27 12:07:54.822097] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:04.798 [2024-11-27 12:07:54.822134] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:04.798 [2024-11-27 12:07:54.822150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.798 [2024-11-27 12:07:54.822161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:04.798 [2024-11-27 12:07:54.822172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.362 ms 00:24:04.798 [2024-11-27 12:07:54.822182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.058 [2024-11-27 12:07:54.850903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.058 [2024-11-27 12:07:54.850951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:05.058 [2024-11-27 12:07:54.850964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.721 ms 00:24:05.058 [2024-11-27 12:07:54.850975] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.058 [2024-11-27 12:07:54.868258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.058 [2024-11-27 12:07:54.868292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:05.058 [2024-11-27 12:07:54.868305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.262 ms 00:24:05.058 [2024-11-27 12:07:54.868316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.058 [2024-11-27 12:07:54.885044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.058 [2024-11-27 12:07:54.885077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:05.058 [2024-11-27 12:07:54.885090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.717 ms 00:24:05.058 [2024-11-27 12:07:54.885100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.058 [2024-11-27 12:07:54.885857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.058 [2024-11-27 12:07:54.885883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:05.058 [2024-11-27 12:07:54.885894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:24:05.058 [2024-11-27 12:07:54.885912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.058 [2024-11-27 12:07:54.980802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.058 [2024-11-27 12:07:54.980855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:05.058 [2024-11-27 12:07:54.980873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.021 ms 00:24:05.058 [2024-11-27 12:07:54.980889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.058 [2024-11-27 12:07:54.990580] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:05.058 [2024-11-27 12:07:54.993690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.058 [2024-11-27 12:07:54.993728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:05.058 [2024-11-27 12:07:54.993742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.774 ms 00:24:05.058 [2024-11-27 12:07:54.993753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.058 [2024-11-27 12:07:54.993831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.058 [2024-11-27 12:07:54.993846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:05.058 [2024-11-27 12:07:54.993858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:05.058 [2024-11-27 12:07:54.993868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.058 [2024-11-27 12:07:54.993952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.058 [2024-11-27 12:07:54.993968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:05.058 [2024-11-27 12:07:54.993981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:05.058 [2024-11-27 12:07:54.993991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.058 [2024-11-27 12:07:54.994014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.058 [2024-11-27 12:07:54.994025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:24:05.058 [2024-11-27 12:07:54.994035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:05.058 [2024-11-27 12:07:54.994045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.058 [2024-11-27 12:07:54.994087] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:05.058 [2024-11-27 12:07:54.994103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.058 [2024-11-27 12:07:54.994113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:05.058 [2024-11-27 12:07:54.994123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:05.058 [2024-11-27 12:07:54.994134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.058 [2024-11-27 12:07:55.029354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.058 [2024-11-27 12:07:55.029398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:05.058 [2024-11-27 12:07:55.029412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.257 ms 00:24:05.058 [2024-11-27 12:07:55.029430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.058 [2024-11-27 12:07:55.029576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.058 [2024-11-27 12:07:55.029595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:05.058 [2024-11-27 12:07:55.029606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:05.058 [2024-11-27 12:07:55.029616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.058 [2024-11-27 12:07:55.031045] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 422.706 ms, result 0 00:24:05.996  [2024-11-27T12:07:57.079Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-27T12:07:58.459Z] Copying: 46/1024 [MB] (23 MBps) [2024-11-27T12:07:59.397Z] Copying: 70/1024 [MB] (23 MBps) [2024-11-27T12:08:00.335Z] Copying: 94/1024 [MB] (24 MBps) [2024-11-27T12:08:01.271Z] Copying: 117/1024 [MB] (23 MBps) [2024-11-27T12:08:02.205Z] Copying: 142/1024 [MB] (24 MBps) [2024-11-27T12:08:03.141Z] Copying: 167/1024 [MB] (25 MBps) [2024-11-27T12:08:04.079Z] Copying: 192/1024 [MB] (24 MBps) [2024-11-27T12:08:05.459Z] Copying: 217/1024 [MB] (24 MBps) [2024-11-27T12:08:06.027Z] Copying: 241/1024 [MB] (24 MBps) [2024-11-27T12:08:07.406Z] Copying: 266/1024 [MB] (25 MBps) [2024-11-27T12:08:08.344Z] Copying: 291/1024 [MB] (24 MBps) [2024-11-27T12:08:09.281Z] Copying: 316/1024 [MB] (24 MBps) [2024-11-27T12:08:10.217Z] Copying: 341/1024 [MB] (25 MBps) [2024-11-27T12:08:11.154Z] Copying: 366/1024 [MB] (24 MBps) [2024-11-27T12:08:12.088Z] Copying: 390/1024 [MB] (24 MBps) [2024-11-27T12:08:13.026Z] Copying: 415/1024 [MB] (24 MBps) [2024-11-27T12:08:14.402Z] Copying: 440/1024 [MB] (24 MBps) [2024-11-27T12:08:15.387Z] Copying: 464/1024 [MB] (24 MBps) [2024-11-27T12:08:16.322Z] Copying: 489/1024 [MB] (24 MBps) [2024-11-27T12:08:17.257Z] Copying: 514/1024 [MB] (24 MBps) [2024-11-27T12:08:18.193Z] Copying: 538/1024 [MB] (24 MBps) [2024-11-27T12:08:19.130Z] Copying: 562/1024 [MB] (24 MBps) [2024-11-27T12:08:20.067Z] Copying: 587/1024 [MB] (24 MBps) [2024-11-27T12:08:21.004Z] Copying: 611/1024 [MB] (23 MBps) [2024-11-27T12:08:22.382Z] Copying: 636/1024 [MB] (24 MBps) [2024-11-27T12:08:23.317Z] Copying: 661/1024 [MB] (25 
MBps) [2024-11-27T12:08:24.254Z] Copying: 686/1024 [MB] (24 MBps) [2024-11-27T12:08:25.189Z] Copying: 711/1024 [MB] (24 MBps) [2024-11-27T12:08:26.125Z] Copying: 735/1024 [MB] (24 MBps) [2024-11-27T12:08:27.063Z] Copying: 761/1024 [MB] (25 MBps) [2024-11-27T12:08:28.000Z] Copying: 785/1024 [MB] (24 MBps) [2024-11-27T12:08:29.012Z] Copying: 809/1024 [MB] (23 MBps) [2024-11-27T12:08:30.394Z] Copying: 832/1024 [MB] (23 MBps) [2024-11-27T12:08:31.341Z] Copying: 854/1024 [MB] (22 MBps) [2024-11-27T12:08:32.280Z] Copying: 877/1024 [MB] (22 MBps) [2024-11-27T12:08:33.218Z] Copying: 900/1024 [MB] (22 MBps) [2024-11-27T12:08:34.155Z] Copying: 922/1024 [MB] (22 MBps) [2024-11-27T12:08:35.105Z] Copying: 947/1024 [MB] (24 MBps) [2024-11-27T12:08:36.041Z] Copying: 972/1024 [MB] (24 MBps) [2024-11-27T12:08:36.979Z] Copying: 997/1024 [MB] (24 MBps) [2024-11-27T12:08:37.239Z] Copying: 1021/1024 [MB] (24 MBps) [2024-11-27T12:08:37.239Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-27 12:08:37.058296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.186 [2024-11-27 12:08:37.058376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:47.186 [2024-11-27 12:08:37.058393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:47.186 [2024-11-27 12:08:37.058404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.186 [2024-11-27 12:08:37.058429] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:47.186 [2024-11-27 12:08:37.062571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.186 [2024-11-27 12:08:37.062606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:47.186 [2024-11-27 12:08:37.062624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.132 ms 00:24:47.186 [2024-11-27 12:08:37.062635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.186 [2024-11-27 12:08:37.064374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.186 [2024-11-27 12:08:37.064415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:47.186 [2024-11-27 12:08:37.064427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.716 ms 00:24:47.186 [2024-11-27 12:08:37.064438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.186 [2024-11-27 12:08:37.081914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.186 [2024-11-27 12:08:37.081954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:47.186 [2024-11-27 12:08:37.081968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.486 ms 00:24:47.186 [2024-11-27 12:08:37.081978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.186 [2024-11-27 12:08:37.086731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.186 [2024-11-27 12:08:37.086763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:47.186 [2024-11-27 12:08:37.086775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.719 ms 00:24:47.186 [2024-11-27 12:08:37.086800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.186 [2024-11-27 12:08:37.121649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.186 [2024-11-27 12:08:37.121684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist NV cache metadata 00:24:47.186 [2024-11-27 12:08:37.121695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.849 ms 00:24:47.186 [2024-11-27 12:08:37.121711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.186 [2024-11-27 12:08:37.141813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.186 [2024-11-27 12:08:37.141851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:47.186 [2024-11-27 12:08:37.141880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.080 ms 00:24:47.186 [2024-11-27 12:08:37.141890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.186 [2024-11-27 12:08:37.142011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.186 [2024-11-27 12:08:37.142030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:47.186 [2024-11-27 12:08:37.142040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:24:47.186 [2024-11-27 12:08:37.142050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.186 [2024-11-27 12:08:37.178433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.186 [2024-11-27 12:08:37.178475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:47.186 [2024-11-27 12:08:37.178488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.426 ms 00:24:47.186 [2024-11-27 12:08:37.178498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.186 [2024-11-27 12:08:37.214361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.186 [2024-11-27 12:08:37.214406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:47.186 [2024-11-27 12:08:37.214419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.880 ms 00:24:47.186 [2024-11-27 12:08:37.214429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.446 [2024-11-27 12:08:37.249235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.447 [2024-11-27 12:08:37.249271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:47.447 [2024-11-27 12:08:37.249283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.825 ms 00:24:47.447 [2024-11-27 12:08:37.249292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.447 [2024-11-27 12:08:37.283482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.447 [2024-11-27 12:08:37.283517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:47.447 [2024-11-27 12:08:37.283530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.164 ms 00:24:47.447 [2024-11-27 12:08:37.283539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.447 [2024-11-27 12:08:37.283571] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:47.447 [2024-11-27 12:08:37.283586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 
12:08:37.283624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 
00:24:47.447 [2024-11-27 12:08:37.283889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.283991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 
wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:47.447 [2024-11-27 12:08:37.284413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:47.448 [2024-11-27 12:08:37.284423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:47.448 [2024-11-27 12:08:37.284433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:47.448 [2024-11-27 12:08:37.284443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:47.448 [2024-11-27 12:08:37.284453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:47.448 [2024-11-27 12:08:37.284464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:47.448 [2024-11-27 12:08:37.284474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:47.448 [2024-11-27 12:08:37.284484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:47.448 [2024-11-27 12:08:37.284494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:47.448 [2024-11-27 12:08:37.284503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:47.448 [2024-11-27 12:08:37.284513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:47.448 [2024-11-27 12:08:37.284523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:47.448 [2024-11-27 12:08:37.284534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:47.448 [2024-11-27 12:08:37.284544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:47.448 [2024-11-27 12:08:37.284554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:47.448 [2024-11-27 12:08:37.284564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:47.448 [2024-11-27 12:08:37.284574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:47.448 [2024-11-27 12:08:37.284584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:47.448 [2024-11-27 12:08:37.284594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:47.448 [2024-11-27 12:08:37.284604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:47.448 [2024-11-27 12:08:37.284614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:47.448 [2024-11-27 12:08:37.284632] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:47.448 [2024-11-27 12:08:37.284644] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 46f1d584-88ce-4301-a84d-84f52c9539f7 00:24:47.448 [2024-11-27 12:08:37.284655] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid 
LBAs: 0 00:24:47.448 [2024-11-27 12:08:37.284664] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:47.448 [2024-11-27 12:08:37.284673] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:47.448 [2024-11-27 12:08:37.284683] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:47.448 [2024-11-27 12:08:37.284692] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:47.448 [2024-11-27 12:08:37.284711] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:47.448 [2024-11-27 12:08:37.284720] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:47.448 [2024-11-27 12:08:37.284728] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:47.448 [2024-11-27 12:08:37.284737] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:47.448 [2024-11-27 12:08:37.284747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.448 [2024-11-27 12:08:37.284756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:47.448 [2024-11-27 12:08:37.284767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.178 ms 00:24:47.448 [2024-11-27 12:08:37.284776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.448 [2024-11-27 12:08:37.303726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.448 [2024-11-27 12:08:37.303759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:47.448 [2024-11-27 12:08:37.303771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.942 ms 00:24:47.448 [2024-11-27 12:08:37.303781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.448 [2024-11-27 12:08:37.304325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.448 [2024-11-27 12:08:37.304347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:47.448 [2024-11-27 12:08:37.304370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:24:47.448 [2024-11-27 12:08:37.304386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.448 [2024-11-27 12:08:37.355888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.448 [2024-11-27 12:08:37.355928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:47.448 [2024-11-27 12:08:37.355941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.448 [2024-11-27 12:08:37.355953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.448 [2024-11-27 12:08:37.356008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.448 [2024-11-27 12:08:37.356019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:47.448 [2024-11-27 12:08:37.356030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.448 [2024-11-27 12:08:37.356045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.448 [2024-11-27 12:08:37.356108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.448 [2024-11-27 12:08:37.356121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:47.448 [2024-11-27 12:08:37.356131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.448 [2024-11-27 12:08:37.356142] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.448 [2024-11-27 12:08:37.356158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.448 [2024-11-27 12:08:37.356186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:47.448 [2024-11-27 12:08:37.356197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.448 [2024-11-27 12:08:37.356207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.448 [2024-11-27 12:08:37.469273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.448 [2024-11-27 12:08:37.469324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:47.448 [2024-11-27 12:08:37.469338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.448 [2024-11-27 12:08:37.469348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.708 [2024-11-27 12:08:37.564125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.708 [2024-11-27 12:08:37.564176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:47.708 [2024-11-27 12:08:37.564190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.708 [2024-11-27 12:08:37.564206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.708 [2024-11-27 12:08:37.564312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.708 [2024-11-27 12:08:37.564325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:47.708 [2024-11-27 12:08:37.564336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.708 [2024-11-27 12:08:37.564347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.708 [2024-11-27 12:08:37.564399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.708 [2024-11-27 12:08:37.564412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:47.708 [2024-11-27 12:08:37.564423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.708 [2024-11-27 12:08:37.564433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.708 [2024-11-27 12:08:37.564560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.708 [2024-11-27 12:08:37.564575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:47.708 [2024-11-27 12:08:37.564585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.708 [2024-11-27 12:08:37.564595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.708 [2024-11-27 12:08:37.564649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.708 [2024-11-27 12:08:37.564661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:47.708 [2024-11-27 12:08:37.564672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.708 [2024-11-27 12:08:37.564681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.708 [2024-11-27 12:08:37.564721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.708 [2024-11-27 12:08:37.564736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:47.708 [2024-11-27 12:08:37.564747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:24:47.708 [2024-11-27 12:08:37.564757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.708 [2024-11-27 12:08:37.564798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.708 [2024-11-27 12:08:37.564810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:47.708 [2024-11-27 12:08:37.564820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.708 [2024-11-27 12:08:37.564830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.708 [2024-11-27 12:08:37.564950] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 507.443 ms, result 0 00:24:49.088 00:24:49.088 00:24:49.088 12:08:38 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:24:49.088 [2024-11-27 12:08:38.849958] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:24:49.088 [2024-11-27 12:08:38.850072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79684 ] 00:24:49.089 [2024-11-27 12:08:39.028455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.089 [2024-11-27 12:08:39.132312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.658 [2024-11-27 12:08:39.462756] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:49.658 [2024-11-27 12:08:39.462815] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:49.658 [2024-11-27 12:08:39.622644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.658 [2024-11-27 12:08:39.622696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:49.658 [2024-11-27 12:08:39.622711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:49.658 [2024-11-27 12:08:39.622722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.658 [2024-11-27 12:08:39.622768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.658 [2024-11-27 12:08:39.622782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:49.658 [2024-11-27 12:08:39.622792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:49.658 [2024-11-27 12:08:39.622802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.658 [2024-11-27 12:08:39.622822] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:49.658 [2024-11-27 12:08:39.623806] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:49.658 [2024-11-27 12:08:39.623836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.658 [2024-11-27 12:08:39.623847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:49.658 [2024-11-27 12:08:39.623858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:24:49.658 [2024-11-27 12:08:39.623868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:49.658 [2024-11-27 12:08:39.625295] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:49.658 [2024-11-27 12:08:39.644002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.658 [2024-11-27 12:08:39.644043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:49.658 [2024-11-27 12:08:39.644058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.739 ms 00:24:49.658 [2024-11-27 12:08:39.644069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.658 [2024-11-27 12:08:39.644134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.658 [2024-11-27 12:08:39.644147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:49.658 [2024-11-27 12:08:39.644158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:49.658 [2024-11-27 12:08:39.644168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.658 [2024-11-27 12:08:39.651112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.658 [2024-11-27 12:08:39.651141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:49.658 [2024-11-27 12:08:39.651152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.882 ms 00:24:49.658 [2024-11-27 12:08:39.651166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.658 [2024-11-27 12:08:39.651239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.658 [2024-11-27 12:08:39.651252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:49.658 [2024-11-27 12:08:39.651263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:49.658 [2024-11-27 12:08:39.651272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.658 [2024-11-27 12:08:39.651310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.658 [2024-11-27 12:08:39.651322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:49.658 [2024-11-27 12:08:39.651332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:49.658 [2024-11-27 12:08:39.651342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.658 [2024-11-27 12:08:39.651379] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:49.658 [2024-11-27 12:08:39.656148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.658 [2024-11-27 12:08:39.656182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:49.658 [2024-11-27 12:08:39.656197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.781 ms 00:24:49.658 [2024-11-27 12:08:39.656207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.658 [2024-11-27 12:08:39.656243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.658 [2024-11-27 12:08:39.656254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:49.658 [2024-11-27 12:08:39.656265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:49.658 [2024-11-27 12:08:39.656275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.658 [2024-11-27 12:08:39.656326] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
FTL layout setup mode 0 00:24:49.658 [2024-11-27 12:08:39.656349] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:49.658 [2024-11-27 12:08:39.656421] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:49.658 [2024-11-27 12:08:39.656463] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:49.658 [2024-11-27 12:08:39.656550] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:49.658 [2024-11-27 12:08:39.656564] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:49.658 [2024-11-27 12:08:39.656577] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:49.658 [2024-11-27 12:08:39.656590] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:49.658 [2024-11-27 12:08:39.656602] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:49.658 [2024-11-27 12:08:39.656614] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:49.658 [2024-11-27 12:08:39.656624] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:49.658 [2024-11-27 12:08:39.656637] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:49.658 [2024-11-27 12:08:39.656646] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:49.658 [2024-11-27 12:08:39.656657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.658 [2024-11-27 12:08:39.656667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:49.658 [2024-11-27 12:08:39.656679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:24:49.658 [2024-11-27 12:08:39.656689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.658 [2024-11-27 12:08:39.656760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.658 [2024-11-27 12:08:39.656771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:49.658 [2024-11-27 12:08:39.656781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:49.658 [2024-11-27 12:08:39.656791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.658 [2024-11-27 12:08:39.656885] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:49.658 [2024-11-27 12:08:39.656899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:49.658 [2024-11-27 12:08:39.656910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:49.658 [2024-11-27 12:08:39.656920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.658 [2024-11-27 12:08:39.656931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:49.658 [2024-11-27 12:08:39.656941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:49.659 [2024-11-27 12:08:39.656950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:49.659 [2024-11-27 12:08:39.656960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:49.659 [2024-11-27 12:08:39.656969] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:49.659 [2024-11-27 12:08:39.656978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:49.659 [2024-11-27 12:08:39.656987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:49.659 [2024-11-27 12:08:39.656996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:49.659 [2024-11-27 12:08:39.657005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:49.659 [2024-11-27 12:08:39.657023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:49.659 [2024-11-27 12:08:39.657033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:49.659 [2024-11-27 12:08:39.657042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.659 [2024-11-27 12:08:39.657051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:49.659 [2024-11-27 12:08:39.657060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:49.659 [2024-11-27 12:08:39.657069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.659 [2024-11-27 12:08:39.657078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:49.659 [2024-11-27 12:08:39.657087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:49.659 [2024-11-27 12:08:39.657096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.659 [2024-11-27 12:08:39.657105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:49.659 [2024-11-27 12:08:39.657114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:49.659 [2024-11-27 12:08:39.657123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.659 [2024-11-27 12:08:39.657132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:49.659 [2024-11-27 12:08:39.657141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:49.659 [2024-11-27 12:08:39.657149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.659 [2024-11-27 12:08:39.657158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:49.659 [2024-11-27 12:08:39.657168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:49.659 [2024-11-27 12:08:39.657178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.659 [2024-11-27 12:08:39.657186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:49.659 [2024-11-27 12:08:39.657195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:49.659 [2024-11-27 12:08:39.657204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:49.659 [2024-11-27 12:08:39.657213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:49.659 [2024-11-27 12:08:39.657222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:49.659 [2024-11-27 12:08:39.657230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:49.659 [2024-11-27 12:08:39.657240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:49.659 [2024-11-27 12:08:39.657249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:49.659 [2024-11-27 12:08:39.657258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:24:49.659 [2024-11-27 12:08:39.657267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:49.659 [2024-11-27 12:08:39.657275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:49.659 [2024-11-27 12:08:39.657284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.659 [2024-11-27 12:08:39.657293] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:49.659 [2024-11-27 12:08:39.657302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:49.659 [2024-11-27 12:08:39.657311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:49.659 [2024-11-27 12:08:39.657321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.659 [2024-11-27 12:08:39.657330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:49.659 [2024-11-27 12:08:39.657339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:49.659 [2024-11-27 12:08:39.657348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:49.659 [2024-11-27 12:08:39.657357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:49.659 [2024-11-27 12:08:39.657367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:49.659 [2024-11-27 12:08:39.657388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:49.659 [2024-11-27 12:08:39.657399] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:49.659 [2024-11-27 12:08:39.657411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:49.659 [2024-11-27 12:08:39.657426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:49.659 [2024-11-27 12:08:39.657436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:49.659 [2024-11-27 12:08:39.657448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:49.659 [2024-11-27 12:08:39.657458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:49.659 [2024-11-27 12:08:39.657468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:49.659 [2024-11-27 12:08:39.657478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:49.659 [2024-11-27 12:08:39.657488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:49.659 [2024-11-27 12:08:39.657497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:49.659 [2024-11-27 12:08:39.657507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:49.659 [2024-11-27 12:08:39.657517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:49.659 [2024-11-27 
12:08:39.657528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:49.659 [2024-11-27 12:08:39.657539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:49.659 [2024-11-27 12:08:39.657549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:49.659 [2024-11-27 12:08:39.657562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:49.659 [2024-11-27 12:08:39.657572] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:49.659 [2024-11-27 12:08:39.657582] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:49.659 [2024-11-27 12:08:39.657594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:49.659 [2024-11-27 12:08:39.657603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:49.659 [2024-11-27 12:08:39.657613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:49.659 [2024-11-27 12:08:39.657623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:49.659 [2024-11-27 12:08:39.657633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.659 [2024-11-27 12:08:39.657643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:49.659 [2024-11-27 12:08:39.657653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:24:49.659 [2024-11-27 12:08:39.657662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.659 [2024-11-27 12:08:39.696883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.659 [2024-11-27 12:08:39.696922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:49.659 [2024-11-27 12:08:39.696935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.222 ms 00:24:49.659 [2024-11-27 12:08:39.696949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.659 [2024-11-27 12:08:39.697026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.659 [2024-11-27 12:08:39.697038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:49.659 [2024-11-27 12:08:39.697048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:49.659 [2024-11-27 12:08:39.697058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.919 [2024-11-27 12:08:39.771173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.920 [2024-11-27 12:08:39.771213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:49.920 [2024-11-27 12:08:39.771227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.179 ms 00:24:49.920 [2024-11-27 12:08:39.771237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.920 [2024-11-27 
12:08:39.771277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.920 [2024-11-27 12:08:39.771288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:49.920 [2024-11-27 12:08:39.771303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:49.920 [2024-11-27 12:08:39.771312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.920 [2024-11-27 12:08:39.771835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.920 [2024-11-27 12:08:39.771860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:49.920 [2024-11-27 12:08:39.771872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:24:49.920 [2024-11-27 12:08:39.771881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.920 [2024-11-27 12:08:39.771998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.920 [2024-11-27 12:08:39.772011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:49.920 [2024-11-27 12:08:39.772027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:24:49.920 [2024-11-27 12:08:39.772037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.920 [2024-11-27 12:08:39.791042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.920 [2024-11-27 12:08:39.791081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:49.920 [2024-11-27 12:08:39.791094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.015 ms 00:24:49.920 [2024-11-27 12:08:39.791104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.920 [2024-11-27 12:08:39.808826] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:49.920 [2024-11-27 12:08:39.808863] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:49.920 [2024-11-27 12:08:39.808878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.920 [2024-11-27 12:08:39.808888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:49.920 [2024-11-27 12:08:39.808898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.704 ms 00:24:49.920 [2024-11-27 12:08:39.808909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.920 [2024-11-27 12:08:39.836351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.920 [2024-11-27 12:08:39.836407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:49.920 [2024-11-27 12:08:39.836421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.446 ms 00:24:49.920 [2024-11-27 12:08:39.836432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.920 [2024-11-27 12:08:39.853685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.920 [2024-11-27 12:08:39.853725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:49.920 [2024-11-27 12:08:39.853737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.223 ms 00:24:49.920 [2024-11-27 12:08:39.853747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.920 [2024-11-27 12:08:39.870671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:49.920 [2024-11-27 12:08:39.870709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:49.920 [2024-11-27 12:08:39.870721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.896 ms 00:24:49.920 [2024-11-27 12:08:39.870731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.920 [2024-11-27 12:08:39.871459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.920 [2024-11-27 12:08:39.871485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:49.920 [2024-11-27 12:08:39.871501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.637 ms 00:24:49.920 [2024-11-27 12:08:39.871511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.920 [2024-11-27 12:08:39.952666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.920 [2024-11-27 12:08:39.952724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:49.920 [2024-11-27 12:08:39.952746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.264 ms 00:24:49.920 [2024-11-27 12:08:39.952756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.920 [2024-11-27 12:08:39.962856] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:49.920 [2024-11-27 12:08:39.965292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.920 [2024-11-27 12:08:39.965323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:49.920 [2024-11-27 12:08:39.965353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.505 ms 00:24:49.920 [2024-11-27 12:08:39.965364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.920 [2024-11-27 12:08:39.965452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.920 [2024-11-27 12:08:39.965467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:49.920 [2024-11-27 12:08:39.965482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:49.920 [2024-11-27 12:08:39.965493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.920 [2024-11-27 12:08:39.965565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.920 [2024-11-27 12:08:39.965578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:49.920 [2024-11-27 12:08:39.965589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:49.920 [2024-11-27 12:08:39.965600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.920 [2024-11-27 12:08:39.965621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.920 [2024-11-27 12:08:39.965632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:49.920 [2024-11-27 12:08:39.965643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:49.920 [2024-11-27 12:08:39.965653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.920 [2024-11-27 12:08:39.965721] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:49.920 [2024-11-27 12:08:39.965735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.920 [2024-11-27 12:08:39.965745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Self test on startup 00:24:49.920 [2024-11-27 12:08:39.965756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:49.920 [2024-11-27 12:08:39.965767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.179 [2024-11-27 12:08:40.000865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.179 [2024-11-27 12:08:40.000910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:50.179 [2024-11-27 12:08:40.000931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.132 ms 00:24:50.179 [2024-11-27 12:08:40.000942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.179 [2024-11-27 12:08:40.001017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.179 [2024-11-27 12:08:40.001032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:50.179 [2024-11-27 12:08:40.001043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:50.179 [2024-11-27 12:08:40.001054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.179 [2024-11-27 12:08:40.003704] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 380.352 ms, result 0 00:24:51.559  [2024-11-27T12:08:42.551Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-27T12:08:43.490Z] Copying: 50/1024 [MB] (25 MBps) [2024-11-27T12:08:44.428Z] Copying: 76/1024 [MB] (25 MBps) [2024-11-27T12:08:45.367Z] Copying: 101/1024 [MB] (24 MBps) [2024-11-27T12:08:46.312Z] Copying: 127/1024 [MB] (25 MBps) [2024-11-27T12:08:47.249Z] Copying: 153/1024 [MB] (26 MBps) [2024-11-27T12:08:48.628Z] Copying: 179/1024 [MB] (26 MBps) [2024-11-27T12:08:49.565Z] Copying: 205/1024 [MB] (25 MBps) [2024-11-27T12:08:50.504Z] Copying: 230/1024 [MB] (25 MBps) [2024-11-27T12:08:51.442Z] Copying: 256/1024 [MB] (25 MBps) [2024-11-27T12:08:52.382Z] Copying: 282/1024 [MB] (26 MBps) [2024-11-27T12:08:53.319Z] Copying: 308/1024 [MB] (26 MBps) [2024-11-27T12:08:54.257Z] Copying: 334/1024 [MB] (26 MBps) [2024-11-27T12:08:55.636Z] Copying: 360/1024 [MB] (26 MBps) [2024-11-27T12:08:56.205Z] Copying: 385/1024 [MB] (24 MBps) [2024-11-27T12:08:57.583Z] Copying: 410/1024 [MB] (24 MBps) [2024-11-27T12:08:58.520Z] Copying: 436/1024 [MB] (26 MBps) [2024-11-27T12:08:59.472Z] Copying: 460/1024 [MB] (24 MBps) [2024-11-27T12:09:00.460Z] Copying: 485/1024 [MB] (25 MBps) [2024-11-27T12:09:01.399Z] Copying: 510/1024 [MB] (24 MBps) [2024-11-27T12:09:02.339Z] Copying: 535/1024 [MB] (24 MBps) [2024-11-27T12:09:03.278Z] Copying: 560/1024 [MB] (25 MBps) [2024-11-27T12:09:04.217Z] Copying: 585/1024 [MB] (25 MBps) [2024-11-27T12:09:05.597Z] Copying: 610/1024 [MB] (24 MBps) [2024-11-27T12:09:06.533Z] Copying: 635/1024 [MB] (24 MBps) [2024-11-27T12:09:07.471Z] Copying: 660/1024 [MB] (25 MBps) [2024-11-27T12:09:08.411Z] Copying: 686/1024 [MB] (25 MBps) [2024-11-27T12:09:09.350Z] Copying: 712/1024 [MB] (26 MBps) [2024-11-27T12:09:10.289Z] Copying: 738/1024 [MB] (25 MBps) [2024-11-27T12:09:11.227Z] Copying: 763/1024 [MB] (25 MBps) [2024-11-27T12:09:12.607Z] Copying: 789/1024 [MB] (25 MBps) [2024-11-27T12:09:13.176Z] Copying: 815/1024 [MB] (25 MBps) [2024-11-27T12:09:14.556Z] Copying: 840/1024 [MB] (25 MBps) [2024-11-27T12:09:15.493Z] Copying: 866/1024 [MB] (26 MBps) [2024-11-27T12:09:16.431Z] Copying: 892/1024 [MB] (25 MBps) [2024-11-27T12:09:17.370Z] Copying: 919/1024 [MB] (26 MBps) [2024-11-27T12:09:18.308Z] Copying: 945/1024 [MB] (26 
MBps) [2024-11-27T12:09:19.246Z] Copying: 971/1024 [MB] (25 MBps) [2024-11-27T12:09:20.182Z] Copying: 998/1024 [MB] (26 MBps) [2024-11-27T12:09:20.752Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-27 12:09:20.442885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.699 [2024-11-27 12:09:20.443275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:30.699 [2024-11-27 12:09:20.443483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:30.699 [2024-11-27 12:09:20.443551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.699 [2024-11-27 12:09:20.443753] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:30.699 [2024-11-27 12:09:20.448801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.699 [2024-11-27 12:09:20.449020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:30.699 [2024-11-27 12:09:20.449161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.949 ms 00:25:30.699 [2024-11-27 12:09:20.449208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.699 [2024-11-27 12:09:20.449642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.699 [2024-11-27 12:09:20.449716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:30.699 [2024-11-27 12:09:20.449755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:25:30.699 [2024-11-27 12:09:20.449878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.699 [2024-11-27 12:09:20.452614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.699 [2024-11-27 12:09:20.452794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:30.699 [2024-11-27 12:09:20.452912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.692 ms 00:25:30.699 [2024-11-27 12:09:20.452966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.699 [2024-11-27 12:09:20.458438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.699 [2024-11-27 12:09:20.458623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:30.699 [2024-11-27 12:09:20.458747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.349 ms 00:25:30.699 [2024-11-27 12:09:20.458791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.699 [2024-11-27 12:09:20.498942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.699 [2024-11-27 12:09:20.499146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:30.699 [2024-11-27 12:09:20.499247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.110 ms 00:25:30.699 [2024-11-27 12:09:20.499291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.699 [2024-11-27 12:09:20.519846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.699 [2024-11-27 12:09:20.520013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:30.699 [2024-11-27 12:09:20.520132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.537 ms 00:25:30.699 [2024-11-27 12:09:20.520176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.699 [2024-11-27 12:09:20.520339] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.699 [2024-11-27 12:09:20.520578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:30.700 [2024-11-27 12:09:20.520624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:25:30.700 [2024-11-27 12:09:20.520654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.700 [2024-11-27 12:09:20.556159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.700 [2024-11-27 12:09:20.556298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:30.700 [2024-11-27 12:09:20.556449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.519 ms 00:25:30.700 [2024-11-27 12:09:20.556490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.700 [2024-11-27 12:09:20.590965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.700 [2024-11-27 12:09:20.591101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:30.700 [2024-11-27 12:09:20.591235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.484 ms 00:25:30.700 [2024-11-27 12:09:20.591276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.700 [2024-11-27 12:09:20.624591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.700 [2024-11-27 12:09:20.624744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:30.700 [2024-11-27 12:09:20.624853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.323 ms 00:25:30.700 [2024-11-27 12:09:20.624893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.700 [2024-11-27 12:09:20.658852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.700 [2024-11-27 12:09:20.659013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:30.700 [2024-11-27 12:09:20.659118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.929 ms 00:25:30.700 [2024-11-27 12:09:20.659158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.700 [2024-11-27 12:09:20.659202] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:30.700 [2024-11-27 12:09:20.659244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659745] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.659996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 
12:09:20.660007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.660017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.660028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.660039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.660049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.660059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.660069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.660079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.660090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.660100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.660111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.660121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.660131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.660142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.660152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:30.700 [2024-11-27 12:09:20.660162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 
00:25:30.701 [2024-11-27 12:09:20.660264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:30.701 [2024-11-27 12:09:20.660466] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:30.701 [2024-11-27 12:09:20.660476] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 46f1d584-88ce-4301-a84d-84f52c9539f7 00:25:30.701 [2024-11-27 12:09:20.660487] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:30.701 [2024-11-27 12:09:20.660497] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:30.701 [2024-11-27 12:09:20.660523] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:30.701 [2024-11-27 12:09:20.660534] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:30.701 [2024-11-27 12:09:20.660555] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:30.701 [2024-11-27 12:09:20.660565] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:30.701 [2024-11-27 12:09:20.660576] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:30.701 [2024-11-27 12:09:20.660585] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:30.701 [2024-11-27 12:09:20.660594] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:30.701 [2024-11-27 12:09:20.660605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.701 [2024-11-27 12:09:20.660616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:30.701 [2024-11-27 12:09:20.660627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.405 ms 00:25:30.701 [2024-11-27 12:09:20.660641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.701 [2024-11-27 12:09:20.680394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.701 [2024-11-27 12:09:20.680423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:30.701 [2024-11-27 12:09:20.680437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.743 ms 00:25:30.701 [2024-11-27 12:09:20.680446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.701 [2024-11-27 12:09:20.681019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.701 [2024-11-27 12:09:20.681039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:30.701 [2024-11-27 12:09:20.681057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:25:30.701 [2024-11-27 12:09:20.681067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.701 [2024-11-27 12:09:20.732181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.701 [2024-11-27 12:09:20.732219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:30.701 [2024-11-27 12:09:20.732249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.701 [2024-11-27 12:09:20.732261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.701 [2024-11-27 12:09:20.732317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.701 [2024-11-27 12:09:20.732328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:30.701 [2024-11-27 12:09:20.732344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.701 [2024-11-27 12:09:20.732354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.701 [2024-11-27 12:09:20.732432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.701 [2024-11-27 12:09:20.732446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:30.701 [2024-11-27 12:09:20.732457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.701 [2024-11-27 12:09:20.732467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.701 [2024-11-27 12:09:20.732494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.701 [2024-11-27 12:09:20.732504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:30.701 [2024-11-27 12:09:20.732514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.701 [2024-11-27 12:09:20.732528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.961 [2024-11-27 12:09:20.852500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.961 [2024-11-27 12:09:20.852546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:25:30.961 [2024-11-27 12:09:20.852561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.961 [2024-11-27 12:09:20.852572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.961 [2024-11-27 12:09:20.947754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.961 [2024-11-27 12:09:20.947803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:30.961 [2024-11-27 12:09:20.947823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.961 [2024-11-27 12:09:20.947833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.961 [2024-11-27 12:09:20.947942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.961 [2024-11-27 12:09:20.947954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:30.961 [2024-11-27 12:09:20.947965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.961 [2024-11-27 12:09:20.947976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.961 [2024-11-27 12:09:20.948012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.961 [2024-11-27 12:09:20.948023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:30.961 [2024-11-27 12:09:20.948033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.961 [2024-11-27 12:09:20.948043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.961 [2024-11-27 12:09:20.948153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.961 [2024-11-27 12:09:20.948174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:30.961 [2024-11-27 12:09:20.948191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.961 [2024-11-27 12:09:20.948201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.961 [2024-11-27 12:09:20.948242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.961 [2024-11-27 12:09:20.948255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:30.961 [2024-11-27 12:09:20.948265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.961 [2024-11-27 12:09:20.948275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.961 [2024-11-27 12:09:20.948317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.961 [2024-11-27 12:09:20.948328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:30.961 [2024-11-27 12:09:20.948345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.961 [2024-11-27 12:09:20.948360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.961 [2024-11-27 12:09:20.948431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.961 [2024-11-27 12:09:20.948444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:30.961 [2024-11-27 12:09:20.948454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.961 [2024-11-27 12:09:20.948463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.961 [2024-11-27 12:09:20.948621] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 
506.500 ms, result 0 00:25:31.900 00:25:31.900 00:25:32.158 12:09:21 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:34.065 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:34.065 12:09:23 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:25:34.065 [2024-11-27 12:09:23.687695] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:25:34.065 [2024-11-27 12:09:23.687825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80136 ] 00:25:34.065 [2024-11-27 12:09:23.863966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.065 [2024-11-27 12:09:23.976095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.324 [2024-11-27 12:09:24.318215] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:34.324 [2024-11-27 12:09:24.318276] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:34.584 [2024-11-27 12:09:24.478320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.585 [2024-11-27 12:09:24.478380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:34.585 [2024-11-27 12:09:24.478396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:34.585 [2024-11-27 12:09:24.478423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.585 [2024-11-27 12:09:24.478470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.585 [2024-11-27 12:09:24.478484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:34.585 [2024-11-27 12:09:24.478495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:34.585 [2024-11-27 12:09:24.478504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.585 [2024-11-27 12:09:24.478526] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:34.585 [2024-11-27 12:09:24.479509] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:34.585 [2024-11-27 12:09:24.479540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.585 [2024-11-27 12:09:24.479552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:34.585 [2024-11-27 12:09:24.479563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.021 ms 00:25:34.585 [2024-11-27 12:09:24.479573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.585 [2024-11-27 12:09:24.481030] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:34.585 [2024-11-27 12:09:24.499535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.585 [2024-11-27 12:09:24.499573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:34.585 [2024-11-27 12:09:24.499587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.536 ms 00:25:34.585 [2024-11-27 12:09:24.499597] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.585 [2024-11-27 12:09:24.499675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.585 [2024-11-27 12:09:24.499688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:34.585 [2024-11-27 12:09:24.499699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:34.585 [2024-11-27 12:09:24.499708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.585 [2024-11-27 12:09:24.506558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.585 [2024-11-27 12:09:24.506590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:34.585 [2024-11-27 12:09:24.506601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.789 ms 00:25:34.585 [2024-11-27 12:09:24.506616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.585 [2024-11-27 12:09:24.506709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.585 [2024-11-27 12:09:24.506722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:34.585 [2024-11-27 12:09:24.506732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:25:34.585 [2024-11-27 12:09:24.506742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.585 [2024-11-27 12:09:24.506784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.585 [2024-11-27 12:09:24.506795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:34.585 [2024-11-27 12:09:24.506806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:34.585 [2024-11-27 12:09:24.506815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.585 [2024-11-27 12:09:24.506842] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:34.585 [2024-11-27 12:09:24.511465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.585 [2024-11-27 12:09:24.511497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:34.585 [2024-11-27 12:09:24.511512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.635 ms 00:25:34.585 [2024-11-27 12:09:24.511539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.585 [2024-11-27 12:09:24.511568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.585 [2024-11-27 12:09:24.511578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:34.585 [2024-11-27 12:09:24.511589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:34.585 [2024-11-27 12:09:24.511598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.585 [2024-11-27 12:09:24.511653] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:34.585 [2024-11-27 12:09:24.511676] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:34.585 [2024-11-27 12:09:24.511710] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:34.585 [2024-11-27 12:09:24.511731] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:34.585 [2024-11-27 12:09:24.511827] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:34.585 [2024-11-27 12:09:24.511842] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:34.585 [2024-11-27 12:09:24.511854] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:34.585 [2024-11-27 12:09:24.511867] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:34.585 [2024-11-27 12:09:24.511879] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:34.585 [2024-11-27 12:09:24.511889] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:34.585 [2024-11-27 12:09:24.511899] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:34.585 [2024-11-27 12:09:24.511912] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:34.585 [2024-11-27 12:09:24.511921] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:34.585 [2024-11-27 12:09:24.511936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.585 [2024-11-27 12:09:24.511953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:34.585 [2024-11-27 12:09:24.511965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:25:34.585 [2024-11-27 12:09:24.511975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.585 [2024-11-27 12:09:24.512046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.585 [2024-11-27 12:09:24.512059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:34.585 [2024-11-27 12:09:24.512069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:34.585 [2024-11-27 12:09:24.512079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.585 [2024-11-27 12:09:24.512173] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:34.585 [2024-11-27 12:09:24.512188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:34.585 [2024-11-27 12:09:24.512199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:34.585 [2024-11-27 12:09:24.512214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.585 [2024-11-27 12:09:24.512231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:34.585 [2024-11-27 12:09:24.512250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:34.585 [2024-11-27 12:09:24.512265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:34.585 [2024-11-27 12:09:24.512275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:34.585 [2024-11-27 12:09:24.512284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:34.585 [2024-11-27 12:09:24.512293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:34.585 [2024-11-27 12:09:24.512302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:34.585 [2024-11-27 12:09:24.512311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:34.585 [2024-11-27 12:09:24.512326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:34.585 [2024-11-27 
12:09:24.512349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:34.585 [2024-11-27 12:09:24.512358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:34.585 [2024-11-27 12:09:24.512367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.585 [2024-11-27 12:09:24.512394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:34.585 [2024-11-27 12:09:24.512404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:34.585 [2024-11-27 12:09:24.512413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.585 [2024-11-27 12:09:24.512422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:34.585 [2024-11-27 12:09:24.512432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:34.585 [2024-11-27 12:09:24.512442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.585 [2024-11-27 12:09:24.512451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:34.585 [2024-11-27 12:09:24.512460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:34.585 [2024-11-27 12:09:24.512469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.585 [2024-11-27 12:09:24.512479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:34.585 [2024-11-27 12:09:24.512487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:34.585 [2024-11-27 12:09:24.512496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.585 [2024-11-27 12:09:24.512505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:34.585 [2024-11-27 12:09:24.512514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:34.585 [2024-11-27 12:09:24.512523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.585 [2024-11-27 12:09:24.512532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:34.585 [2024-11-27 12:09:24.512541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:34.585 [2024-11-27 12:09:24.512551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:34.585 [2024-11-27 12:09:24.512560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:34.585 [2024-11-27 12:09:24.512569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:34.585 [2024-11-27 12:09:24.512583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:34.585 [2024-11-27 12:09:24.512599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:34.585 [2024-11-27 12:09:24.512615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:34.585 [2024-11-27 12:09:24.512631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.586 [2024-11-27 12:09:24.512643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:34.586 [2024-11-27 12:09:24.512652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:34.586 [2024-11-27 12:09:24.512661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.586 [2024-11-27 12:09:24.512670] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:34.586 [2024-11-27 12:09:24.512680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region sb_mirror 00:25:34.586 [2024-11-27 12:09:24.512690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:34.586 [2024-11-27 12:09:24.512699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.586 [2024-11-27 12:09:24.512709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:34.586 [2024-11-27 12:09:24.512718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:34.586 [2024-11-27 12:09:24.512727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:34.586 [2024-11-27 12:09:24.512735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:34.586 [2024-11-27 12:09:24.512744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:34.586 [2024-11-27 12:09:24.512753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:34.586 [2024-11-27 12:09:24.512770] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:34.586 [2024-11-27 12:09:24.512785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:34.586 [2024-11-27 12:09:24.512800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:34.586 [2024-11-27 12:09:24.512811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:34.586 [2024-11-27 12:09:24.512821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:34.586 [2024-11-27 12:09:24.512831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:34.586 [2024-11-27 12:09:24.512842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:34.586 [2024-11-27 12:09:24.512852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:34.586 [2024-11-27 12:09:24.512861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:34.586 [2024-11-27 12:09:24.512871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:34.586 [2024-11-27 12:09:24.512881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:34.586 [2024-11-27 12:09:24.512890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:34.586 [2024-11-27 12:09:24.512901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:34.586 [2024-11-27 12:09:24.512916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:34.586 [2024-11-27 12:09:24.512930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:34.586 [2024-11-27 12:09:24.512948] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:34.586 [2024-11-27 12:09:24.512958] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:34.586 [2024-11-27 12:09:24.512968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:34.586 [2024-11-27 12:09:24.512979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:34.586 [2024-11-27 12:09:24.512989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:34.586 [2024-11-27 12:09:24.512999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:34.586 [2024-11-27 12:09:24.513009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:34.586 [2024-11-27 12:09:24.513019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.586 [2024-11-27 12:09:24.513030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:34.586 [2024-11-27 12:09:24.513040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.901 ms 00:25:34.586 [2024-11-27 12:09:24.513050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.586 [2024-11-27 12:09:24.548480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.586 [2024-11-27 12:09:24.548515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:34.586 [2024-11-27 12:09:24.548528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.439 ms 00:25:34.586 [2024-11-27 12:09:24.548543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.586 [2024-11-27 12:09:24.548634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.586 [2024-11-27 12:09:24.548645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:34.586 [2024-11-27 12:09:24.548656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:25:34.586 [2024-11-27 12:09:24.548666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.586 [2024-11-27 12:09:24.622412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.586 [2024-11-27 12:09:24.622450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:34.586 [2024-11-27 12:09:24.622464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.806 ms 00:25:34.586 [2024-11-27 12:09:24.622491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.586 [2024-11-27 12:09:24.622533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.586 [2024-11-27 12:09:24.622546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:34.586 [2024-11-27 12:09:24.622561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:34.586 [2024-11-27 12:09:24.622571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.586 [2024-11-27 12:09:24.623099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.586 [2024-11-27 
12:09:24.623134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:34.586 [2024-11-27 12:09:24.623150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:25:34.586 [2024-11-27 12:09:24.623161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.586 [2024-11-27 12:09:24.623282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.586 [2024-11-27 12:09:24.623298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:34.586 [2024-11-27 12:09:24.623324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:25:34.586 [2024-11-27 12:09:24.623336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.846 [2024-11-27 12:09:24.643179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.846 [2024-11-27 12:09:24.643216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:34.846 [2024-11-27 12:09:24.643237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.839 ms 00:25:34.846 [2024-11-27 12:09:24.643247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.846 [2024-11-27 12:09:24.661817] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:34.846 [2024-11-27 12:09:24.661856] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:34.846 [2024-11-27 12:09:24.661870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.846 [2024-11-27 12:09:24.661880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:34.846 [2024-11-27 12:09:24.661891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.533 ms 00:25:34.846 [2024-11-27 12:09:24.661917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.846 [2024-11-27 12:09:24.689947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.846 [2024-11-27 12:09:24.689988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:34.846 [2024-11-27 12:09:24.690002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.033 ms 00:25:34.846 [2024-11-27 12:09:24.690014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.846 [2024-11-27 12:09:24.707615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.846 [2024-11-27 12:09:24.707650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:34.846 [2024-11-27 12:09:24.707662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.557 ms 00:25:34.846 [2024-11-27 12:09:24.707671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.846 [2024-11-27 12:09:24.724651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.846 [2024-11-27 12:09:24.724685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:34.846 [2024-11-27 12:09:24.724697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.954 ms 00:25:34.846 [2024-11-27 12:09:24.724706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.846 [2024-11-27 12:09:24.725491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.846 [2024-11-27 12:09:24.725518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:25:34.846 [2024-11-27 12:09:24.725533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:25:34.846 [2024-11-27 12:09:24.725543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.846 [2024-11-27 12:09:24.805537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.846 [2024-11-27 12:09:24.805598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:34.846 [2024-11-27 12:09:24.805619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.101 ms 00:25:34.846 [2024-11-27 12:09:24.805646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.846 [2024-11-27 12:09:24.816137] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:34.846 [2024-11-27 12:09:24.818434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.846 [2024-11-27 12:09:24.818467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:34.846 [2024-11-27 12:09:24.818480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.761 ms 00:25:34.846 [2024-11-27 12:09:24.818502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.846 [2024-11-27 12:09:24.818600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.846 [2024-11-27 12:09:24.818613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:34.846 [2024-11-27 12:09:24.818628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:34.846 [2024-11-27 12:09:24.818638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.846 [2024-11-27 12:09:24.818728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.846 [2024-11-27 12:09:24.818742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:34.846 [2024-11-27 12:09:24.818753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:34.846 [2024-11-27 12:09:24.818763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.846 [2024-11-27 12:09:24.818787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.846 [2024-11-27 12:09:24.818798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:34.846 [2024-11-27 12:09:24.818808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:34.846 [2024-11-27 12:09:24.818818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.846 [2024-11-27 12:09:24.818861] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:34.846 [2024-11-27 12:09:24.818881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.846 [2024-11-27 12:09:24.818892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:34.846 [2024-11-27 12:09:24.818902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:34.846 [2024-11-27 12:09:24.818912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.846 [2024-11-27 12:09:24.855989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.846 [2024-11-27 12:09:24.856028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:34.846 [2024-11-27 12:09:24.856065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 37.114 ms 00:25:34.846 [2024-11-27 12:09:24.856076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.846 [2024-11-27 12:09:24.856154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.846 [2024-11-27 12:09:24.856167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:34.846 [2024-11-27 12:09:24.856177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:34.847 [2024-11-27 12:09:24.856187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.847 [2024-11-27 12:09:24.857390] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 379.127 ms, result 0 00:25:36.226  [2024-11-27T12:10:06.719Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-27 12:10:06.632115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.666 [2024-11-27 12:10:06.632178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:16.666 [2024-11-27
12:10:06.632203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:16.666 [2024-11-27 12:10:06.632214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.666 [2024-11-27 12:10:06.633008] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:16.666 [2024-11-27 12:10:06.638922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.666 [2024-11-27 12:10:06.638963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:16.666 [2024-11-27 12:10:06.638993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.893 ms 00:26:16.666 [2024-11-27 12:10:06.639003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.666 [2024-11-27 12:10:06.650594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.666 [2024-11-27 12:10:06.650635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:16.666 [2024-11-27 12:10:06.650666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.009 ms 00:26:16.666 [2024-11-27 12:10:06.650684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.666 [2024-11-27 12:10:06.674459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.666 [2024-11-27 12:10:06.674528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:16.666 [2024-11-27 12:10:06.674559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.795 ms 00:26:16.666 [2024-11-27 12:10:06.674583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.666 [2024-11-27 12:10:06.679720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.666 [2024-11-27 12:10:06.679756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:16.666 [2024-11-27 12:10:06.679768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.110 ms 00:26:16.666 [2024-11-27 12:10:06.679786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.666 [2024-11-27 12:10:06.716728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.666 [2024-11-27 12:10:06.716768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:16.666 [2024-11-27 12:10:06.716782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.961 ms 00:26:16.666 [2024-11-27 12:10:06.716792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.925 [2024-11-27 12:10:06.737561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.925 [2024-11-27 12:10:06.737597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:16.925 [2024-11-27 12:10:06.737627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.766 ms 00:26:16.925 [2024-11-27 12:10:06.737637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.925 [2024-11-27 12:10:06.850872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.925 [2024-11-27 12:10:06.850914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:16.925 [2024-11-27 12:10:06.850929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.375 ms 00:26:16.925 [2024-11-27 12:10:06.850940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.925 [2024-11-27 12:10:06.887238] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.925 [2024-11-27 12:10:06.887273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:16.925 [2024-11-27 12:10:06.887303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.339 ms 00:26:16.925 [2024-11-27 12:10:06.887314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.925 [2024-11-27 12:10:06.922280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.925 [2024-11-27 12:10:06.922314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:16.925 [2024-11-27 12:10:06.922344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.984 ms 00:26:16.925 [2024-11-27 12:10:06.922354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.925 [2024-11-27 12:10:06.955979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.925 [2024-11-27 12:10:06.956013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:16.925 [2024-11-27 12:10:06.956043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.634 ms 00:26:16.925 [2024-11-27 12:10:06.956053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.186 [2024-11-27 12:10:06.991275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.186 [2024-11-27 12:10:06.991312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:17.186 [2024-11-27 12:10:06.991324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.203 ms 00:26:17.186 [2024-11-27 12:10:06.991350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.186 [2024-11-27 12:10:06.991401] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:17.186 [2024-11-27 12:10:06.991417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 106496 / 261120 wr_cnt: 1 state: open 00:26:17.186 [2024-11-27 12:10:06.991430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991825] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.991993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.992004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.992014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.992024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.992034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.992045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.992055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.992065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.992075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.992092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.992107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 
12:10:06.992125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.992145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.992158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.992169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.992179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.992191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.992201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.992211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-11-27 12:10:06.992222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:26:17.187 [2024-11-27 12:10:06.992412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-11-27 12:10:06.992594] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:17.187 [2024-11-27 12:10:06.992604] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 46f1d584-88ce-4301-a84d-84f52c9539f7 00:26:17.187 [2024-11-27 12:10:06.992615] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 106496 00:26:17.187 [2024-11-27 12:10:06.992624] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 107456 00:26:17.187 [2024-11-27 12:10:06.992634] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 106496 00:26:17.187 [2024-11-27 12:10:06.992644] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0090 00:26:17.187 [2024-11-27 12:10:06.992670] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:17.187 [2024-11-27 12:10:06.992680] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:17.187 [2024-11-27 12:10:06.992689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:17.187 [2024-11-27 12:10:06.992698] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:17.187 [2024-11-27 12:10:06.992706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:17.187 [2024-11-27 12:10:06.992716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.187 [2024-11-27 12:10:06.992726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:17.187 [2024-11-27 12:10:06.992737] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.318 ms 00:26:17.187 [2024-11-27 12:10:06.992746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.187 [2024-11-27 12:10:07.012530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.187 [2024-11-27 12:10:07.012563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:17.187 [2024-11-27 12:10:07.012597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.762 ms 00:26:17.187 [2024-11-27 12:10:07.012608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.187 [2024-11-27 12:10:07.013133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.187 [2024-11-27 12:10:07.013152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:17.187 [2024-11-27 12:10:07.013163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:26:17.187 [2024-11-27 12:10:07.013173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.187 [2024-11-27 12:10:07.062047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.187 [2024-11-27 12:10:07.062085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:17.187 [2024-11-27 12:10:07.062098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.187 [2024-11-27 12:10:07.062124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.187 [2024-11-27 12:10:07.062178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.187 [2024-11-27 12:10:07.062189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:17.187 [2024-11-27 12:10:07.062199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.187 [2024-11-27 12:10:07.062209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.187 [2024-11-27 12:10:07.062275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.187 [2024-11-27 12:10:07.062292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:17.187 [2024-11-27 12:10:07.062302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.187 [2024-11-27 12:10:07.062316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.187 [2024-11-27 12:10:07.062343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.187 [2024-11-27 12:10:07.062360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:17.187 [2024-11-27 12:10:07.062388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.187 [2024-11-27 12:10:07.062399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.187 [2024-11-27 12:10:07.179072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.187 [2024-11-27 12:10:07.179129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:17.187 [2024-11-27 12:10:07.179143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.187 [2024-11-27 12:10:07.179170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.447 [2024-11-27 12:10:07.277290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.447 [2024-11-27 12:10:07.277331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:26:17.447 [2024-11-27 12:10:07.277344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.447 [2024-11-27 12:10:07.277354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.447 [2024-11-27 12:10:07.277468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.447 [2024-11-27 12:10:07.277480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:17.447 [2024-11-27 12:10:07.277491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.447 [2024-11-27 12:10:07.277507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.447 [2024-11-27 12:10:07.277543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.447 [2024-11-27 12:10:07.277554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:17.447 [2024-11-27 12:10:07.277564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.447 [2024-11-27 12:10:07.277573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.447 [2024-11-27 12:10:07.277671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.447 [2024-11-27 12:10:07.277691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:17.447 [2024-11-27 12:10:07.277731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.447 [2024-11-27 12:10:07.277753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.447 [2024-11-27 12:10:07.277795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.447 [2024-11-27 12:10:07.277808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:17.447 [2024-11-27 12:10:07.277819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.447 [2024-11-27 12:10:07.277830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.447 [2024-11-27 12:10:07.277870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.447 [2024-11-27 12:10:07.277880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:17.447 [2024-11-27 12:10:07.277896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.447 [2024-11-27 12:10:07.277912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.447 [2024-11-27 12:10:07.277963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.447 [2024-11-27 12:10:07.277974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:17.447 [2024-11-27 12:10:07.277985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.447 [2024-11-27 12:10:07.277995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.447 [2024-11-27 12:10:07.278118] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 649.273 ms, result 0 00:26:18.842 00:26:18.842 00:26:19.101 12:10:08 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:26:19.101 [2024-11-27 12:10:08.984414] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 
initialization... 00:26:19.101 [2024-11-27 12:10:08.984544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80593 ] 00:26:19.361 [2024-11-27 12:10:09.160463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.361 [2024-11-27 12:10:09.275588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.621 [2024-11-27 12:10:09.625108] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:19.621 [2024-11-27 12:10:09.625175] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:19.882 [2024-11-27 12:10:09.785144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.882 [2024-11-27 12:10:09.785203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:19.882 [2024-11-27 12:10:09.785219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:19.882 [2024-11-27 12:10:09.785229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.882 [2024-11-27 12:10:09.785291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.882 [2024-11-27 12:10:09.785305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:19.882 [2024-11-27 12:10:09.785316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:26:19.882 [2024-11-27 12:10:09.785326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.882 [2024-11-27 12:10:09.785346] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:19.882 [2024-11-27 12:10:09.786286] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:19.882 [2024-11-27 12:10:09.786320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.882 [2024-11-27 12:10:09.786330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:19.882 [2024-11-27 12:10:09.786341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:26:19.882 [2024-11-27 12:10:09.786351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.882 [2024-11-27 12:10:09.787840] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:19.882 [2024-11-27 12:10:09.806864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.882 [2024-11-27 12:10:09.806902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:19.882 [2024-11-27 12:10:09.806933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.055 ms 00:26:19.882 [2024-11-27 12:10:09.806943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.882 [2024-11-27 12:10:09.807007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.882 [2024-11-27 12:10:09.807021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:19.882 [2024-11-27 12:10:09.807031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:26:19.882 [2024-11-27 12:10:09.807041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.882 [2024-11-27 12:10:09.813955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:19.883 [2024-11-27 12:10:09.813986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:19.883 [2024-11-27 12:10:09.814014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.853 ms 00:26:19.883 [2024-11-27 12:10:09.814028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.883 [2024-11-27 12:10:09.814106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.883 [2024-11-27 12:10:09.814120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:19.883 [2024-11-27 12:10:09.814131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:26:19.883 [2024-11-27 12:10:09.814141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.883 [2024-11-27 12:10:09.814181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.883 [2024-11-27 12:10:09.814193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:19.883 [2024-11-27 12:10:09.814203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:19.883 [2024-11-27 12:10:09.814212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.883 [2024-11-27 12:10:09.814239] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:19.883 [2024-11-27 12:10:09.818929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.883 [2024-11-27 12:10:09.818960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:19.883 [2024-11-27 12:10:09.818991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.703 ms 00:26:19.883 [2024-11-27 12:10:09.819001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.883 [2024-11-27 12:10:09.819032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.883 [2024-11-27 12:10:09.819042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:19.883 [2024-11-27 12:10:09.819052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:19.883 [2024-11-27 12:10:09.819062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.883 [2024-11-27 12:10:09.819116] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:19.883 [2024-11-27 12:10:09.819140] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:19.883 [2024-11-27 12:10:09.819173] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:19.883 [2024-11-27 12:10:09.819194] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:19.883 [2024-11-27 12:10:09.819300] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:19.883 [2024-11-27 12:10:09.819322] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:19.883 [2024-11-27 12:10:09.819336] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:19.883 [2024-11-27 12:10:09.819349] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:19.883 [2024-11-27 12:10:09.819361] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:19.883 [2024-11-27 12:10:09.819372] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:19.883 [2024-11-27 12:10:09.819382] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:19.883 [2024-11-27 12:10:09.819412] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:19.883 [2024-11-27 12:10:09.819422] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:19.883 [2024-11-27 12:10:09.819432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.883 [2024-11-27 12:10:09.819443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:19.883 [2024-11-27 12:10:09.819454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:26:19.883 [2024-11-27 12:10:09.819464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.883 [2024-11-27 12:10:09.819552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.883 [2024-11-27 12:10:09.819568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:19.883 [2024-11-27 12:10:09.819578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:19.883 [2024-11-27 12:10:09.819588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.883 [2024-11-27 12:10:09.819684] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:19.883 [2024-11-27 12:10:09.819707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:19.883 [2024-11-27 12:10:09.819718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:19.883 [2024-11-27 12:10:09.819729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.883 [2024-11-27 12:10:09.819739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:19.883 [2024-11-27 12:10:09.819749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:19.883 [2024-11-27 12:10:09.819758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:19.883 [2024-11-27 12:10:09.819768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:19.883 [2024-11-27 12:10:09.819777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:19.883 [2024-11-27 12:10:09.819786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:19.883 [2024-11-27 12:10:09.819798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:19.883 [2024-11-27 12:10:09.819814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:19.883 [2024-11-27 12:10:09.819831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:19.883 [2024-11-27 12:10:09.819852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:19.883 [2024-11-27 12:10:09.819862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:19.883 [2024-11-27 12:10:09.819871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.883 [2024-11-27 12:10:09.819880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:19.883 [2024-11-27 12:10:09.819889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:19.883 [2024-11-27 12:10:09.819897] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.883 [2024-11-27 12:10:09.819906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:19.883 [2024-11-27 12:10:09.819918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:19.883 [2024-11-27 12:10:09.819935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:19.883 [2024-11-27 12:10:09.819949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:19.883 [2024-11-27 12:10:09.819958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:19.883 [2024-11-27 12:10:09.819967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:19.883 [2024-11-27 12:10:09.819976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:19.883 [2024-11-27 12:10:09.819985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:19.883 [2024-11-27 12:10:09.819994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:19.883 [2024-11-27 12:10:09.820003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:19.883 [2024-11-27 12:10:09.820011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:19.883 [2024-11-27 12:10:09.820020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:19.883 [2024-11-27 12:10:09.820029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:19.883 [2024-11-27 12:10:09.820038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:19.883 [2024-11-27 12:10:09.820047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:19.883 [2024-11-27 12:10:09.820056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:19.883 [2024-11-27 12:10:09.820065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:19.883 [2024-11-27 12:10:09.820074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:19.883 [2024-11-27 12:10:09.820082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:19.883 [2024-11-27 12:10:09.820091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:19.883 [2024-11-27 12:10:09.820100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.884 [2024-11-27 12:10:09.820108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:19.884 [2024-11-27 12:10:09.820117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:19.884 [2024-11-27 12:10:09.820126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.884 [2024-11-27 12:10:09.820134] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:19.884 [2024-11-27 12:10:09.820144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:19.884 [2024-11-27 12:10:09.820154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:19.884 [2024-11-27 12:10:09.820170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.884 [2024-11-27 12:10:09.820185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:19.884 [2024-11-27 12:10:09.820202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:19.884 [2024-11-27 12:10:09.820217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:19.884 
[2024-11-27 12:10:09.820226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:19.884 [2024-11-27 12:10:09.820234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:19.884 [2024-11-27 12:10:09.820243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:19.884 [2024-11-27 12:10:09.820254] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:19.884 [2024-11-27 12:10:09.820266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:19.884 [2024-11-27 12:10:09.820281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:19.884 [2024-11-27 12:10:09.820291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:19.884 [2024-11-27 12:10:09.820302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:19.884 [2024-11-27 12:10:09.820312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:19.884 [2024-11-27 12:10:09.820321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:19.884 [2024-11-27 12:10:09.820331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:19.884 [2024-11-27 12:10:09.820342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:19.884 [2024-11-27 12:10:09.820351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:19.884 [2024-11-27 12:10:09.820379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:19.884 [2024-11-27 12:10:09.820389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:19.884 [2024-11-27 12:10:09.820416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:19.884 [2024-11-27 12:10:09.820426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:19.884 [2024-11-27 12:10:09.820437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:19.884 [2024-11-27 12:10:09.820448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:19.884 [2024-11-27 12:10:09.820458] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:19.884 [2024-11-27 12:10:09.820470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:19.884 [2024-11-27 12:10:09.820489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:19.884 [2024-11-27 12:10:09.820508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:19.884 [2024-11-27 12:10:09.820524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:19.884 [2024-11-27 12:10:09.820542] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:19.884 [2024-11-27 12:10:09.820560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.884 [2024-11-27 12:10:09.820572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:19.884 [2024-11-27 12:10:09.820583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:26:19.884 [2024-11-27 12:10:09.820593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.884 [2024-11-27 12:10:09.859837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.884 [2024-11-27 12:10:09.859872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:19.884 [2024-11-27 12:10:09.859885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.256 ms 00:26:19.884 [2024-11-27 12:10:09.859900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.884 [2024-11-27 12:10:09.859975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.884 [2024-11-27 12:10:09.859986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:19.884 [2024-11-27 12:10:09.859997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:19.884 [2024-11-27 12:10:09.860006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.884 [2024-11-27 12:10:09.914758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.884 [2024-11-27 12:10:09.914793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:19.884 [2024-11-27 12:10:09.914806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.782 ms 00:26:19.884 [2024-11-27 12:10:09.914815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.884 [2024-11-27 12:10:09.914866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.884 [2024-11-27 12:10:09.914878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:19.884 [2024-11-27 12:10:09.914892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:19.884 [2024-11-27 12:10:09.914902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.884 [2024-11-27 12:10:09.915417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.884 [2024-11-27 12:10:09.915440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:19.884 [2024-11-27 12:10:09.915451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:26:19.884 [2024-11-27 12:10:09.915461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.884 [2024-11-27 12:10:09.915579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.884 [2024-11-27 12:10:09.915592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:19.884 [2024-11-27 12:10:09.915608] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:26:19.884 [2024-11-27 12:10:09.915622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.144 [2024-11-27 12:10:09.934467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.144 [2024-11-27 12:10:09.934505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:20.144 [2024-11-27 12:10:09.934519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.844 ms 00:26:20.144 [2024-11-27 12:10:09.934529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.144 [2024-11-27 12:10:09.953175] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:20.144 [2024-11-27 12:10:09.953213] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:20.144 [2024-11-27 12:10:09.953244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.144 [2024-11-27 12:10:09.953255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:20.144 [2024-11-27 12:10:09.953266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.643 ms 00:26:20.144 [2024-11-27 12:10:09.953276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.144 [2024-11-27 12:10:09.981449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.144 [2024-11-27 12:10:09.981485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:20.144 [2024-11-27 12:10:09.981499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.176 ms 00:26:20.144 [2024-11-27 12:10:09.981509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.144 [2024-11-27 12:10:09.998999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.144 [2024-11-27 12:10:09.999044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:20.144 [2024-11-27 12:10:09.999057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.457 ms 00:26:20.144 [2024-11-27 12:10:09.999066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.144 [2024-11-27 12:10:10.016995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.144 [2024-11-27 12:10:10.017029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:20.144 [2024-11-27 12:10:10.017041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.901 ms 00:26:20.144 [2024-11-27 12:10:10.017050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.144 [2024-11-27 12:10:10.017837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.144 [2024-11-27 12:10:10.017866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:20.144 [2024-11-27 12:10:10.017882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms 00:26:20.144 [2024-11-27 12:10:10.017892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.144 [2024-11-27 12:10:10.100087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.144 [2024-11-27 12:10:10.100158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:20.144 [2024-11-27 12:10:10.100181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 82.304 ms 00:26:20.144 [2024-11-27 12:10:10.100192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.144 [2024-11-27 12:10:10.110335] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:20.144 [2024-11-27 12:10:10.112687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.144 [2024-11-27 12:10:10.112716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:20.144 [2024-11-27 12:10:10.112728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.452 ms 00:26:20.144 [2024-11-27 12:10:10.112738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.144 [2024-11-27 12:10:10.112829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.144 [2024-11-27 12:10:10.112842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:20.144 [2024-11-27 12:10:10.112857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:20.144 [2024-11-27 12:10:10.112867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.144 [2024-11-27 12:10:10.114428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.144 [2024-11-27 12:10:10.114463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:20.144 [2024-11-27 12:10:10.114476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.520 ms 00:26:20.144 [2024-11-27 12:10:10.114486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.144 [2024-11-27 12:10:10.114516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.144 [2024-11-27 12:10:10.114527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:20.144 [2024-11-27 12:10:10.114537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:20.144 [2024-11-27 12:10:10.114547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.144 [2024-11-27 12:10:10.114591] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:20.144 [2024-11-27 12:10:10.114604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.144 [2024-11-27 12:10:10.114613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:20.144 [2024-11-27 12:10:10.114624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:20.144 [2024-11-27 12:10:10.114634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.144 [2024-11-27 12:10:10.150092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.144 [2024-11-27 12:10:10.150128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:20.144 [2024-11-27 12:10:10.150164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.488 ms 00:26:20.145 [2024-11-27 12:10:10.150175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.145 [2024-11-27 12:10:10.150248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.145 [2024-11-27 12:10:10.150260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:20.145 [2024-11-27 12:10:10.150271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:20.145 [2024-11-27 12:10:10.150280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
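(Editor's note: each management step above is emitted by trace_step as an Action/name/duration/status group, and the finish_msg entry just below totals the whole 'FTL startup' pipeline at 366.365 ms. A minimal awk sketch for tabulating per-step durations from such output; it assumes one log entry per line, and "ftl.log" is a hypothetical capture of this log, not a file the test produces.)

    # Hedged sketch: sum and print the duration of every trace_step action.
    # Assumes "name:" and "duration:" lines appear in pairs, as in the log above.
    awk '/trace_step: .*name:/     { sub(/.*name: /, ""); step = $0 }
         /trace_step: .*duration:/ { ms = $(NF - 1); total += ms
                                     printf "%-35s %9.3f ms\n", step, ms }
         END                       { printf "%-35s %9.3f ms\n", "total", total }' ftl.log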
00:26:20.145 [2024-11-27 12:10:10.151408] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 366.365 ms, result 0 00:26:21.526  [2024-11-27T12:10:51.026Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-27 12:10:50.761285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.973 [2024-11-27 12:10:50.761350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:00.973 [2024-11-27 12:10:50.761391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:00.973 [2024-11-27 12:10:50.761403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.973 [2024-11-27 12:10:50.761452] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:00.973 [2024-11-27 12:10:50.766349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.973 [2024-11-27 12:10:50.766398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:00.973 [2024-11-27 12:10:50.766413] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 4.878 ms 00:27:00.973 [2024-11-27 12:10:50.766425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.973 [2024-11-27 12:10:50.766649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.973 [2024-11-27 12:10:50.766663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:00.973 [2024-11-27 12:10:50.766674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:27:00.973 [2024-11-27 12:10:50.766690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.973 [2024-11-27 12:10:50.771524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.973 [2024-11-27 12:10:50.771569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:00.973 [2024-11-27 12:10:50.771583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.823 ms 00:27:00.973 [2024-11-27 12:10:50.771594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.973 [2024-11-27 12:10:50.776824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.973 [2024-11-27 12:10:50.776863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:00.973 [2024-11-27 12:10:50.776876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.161 ms 00:27:00.973 [2024-11-27 12:10:50.776893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.973 [2024-11-27 12:10:50.815210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.973 [2024-11-27 12:10:50.815248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:00.973 [2024-11-27 12:10:50.815263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.330 ms 00:27:00.973 [2024-11-27 12:10:50.815272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.973 [2024-11-27 12:10:50.835992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.973 [2024-11-27 12:10:50.836027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:00.973 [2024-11-27 12:10:50.836040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.695 ms 00:27:00.973 [2024-11-27 12:10:50.836050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.973 [2024-11-27 12:10:50.985767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.973 [2024-11-27 12:10:50.985821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:00.973 [2024-11-27 12:10:50.985836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 149.901 ms 00:27:00.973 [2024-11-27 12:10:50.985847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.973 [2024-11-27 12:10:51.020373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.973 [2024-11-27 12:10:51.020409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:00.973 [2024-11-27 12:10:51.020422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.563 ms 00:27:00.973 [2024-11-27 12:10:51.020432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.233 [2024-11-27 12:10:51.054730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.233 [2024-11-27 12:10:51.054764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:01.233 
[2024-11-27 12:10:51.054776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.315 ms 00:27:01.233 [2024-11-27 12:10:51.054785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.233 [2024-11-27 12:10:51.088179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.233 [2024-11-27 12:10:51.088216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:01.233 [2024-11-27 12:10:51.088227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.396 ms 00:27:01.233 [2024-11-27 12:10:51.088237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.233 [2024-11-27 12:10:51.121459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.233 [2024-11-27 12:10:51.121496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:01.233 [2024-11-27 12:10:51.121525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.189 ms 00:27:01.233 [2024-11-27 12:10:51.121534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.233 [2024-11-27 12:10:51.121568] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:01.233 [2024-11-27 12:10:51.121584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:27:01.233 [2024-11-27 12:10:51.121597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121776] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:01.233 [2024-11-27 12:10:51.121964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.121976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.121986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.121997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 
[2024-11-27 12:10:51.122039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 
state: free 00:27:01.234 [2024-11-27 12:10:51.122297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 
0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:01.234 [2024-11-27 12:10:51.122671] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:01.234 [2024-11-27 12:10:51.122680] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 46f1d584-88ce-4301-a84d-84f52c9539f7 00:27:01.234 [2024-11-27 12:10:51.122691] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:27:01.234 [2024-11-27 12:10:51.122701] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 25536 00:27:01.234 [2024-11-27 12:10:51.122711] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 24576 00:27:01.234 [2024-11-27 12:10:51.122721] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0391 00:27:01.234 [2024-11-27 12:10:51.122736] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:01.234 [2024-11-27 12:10:51.122756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:01.234 [2024-11-27 12:10:51.122766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:01.234 [2024-11-27 12:10:51.122775] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:01.234 [2024-11-27 12:10:51.122784] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:01.234 [2024-11-27 12:10:51.122793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.234 [2024-11-27 12:10:51.122803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:01.234 [2024-11-27 12:10:51.122813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.228 ms 00:27:01.234 [2024-11-27 12:10:51.122831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.234 [2024-11-27 12:10:51.142314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.234 [2024-11-27 12:10:51.142344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:01.234 [2024-11-27 12:10:51.142380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.482 ms 00:27:01.234 [2024-11-27 12:10:51.142390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.234 [2024-11-27 12:10:51.142993] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.234 [2024-11-27 12:10:51.143009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:01.234 [2024-11-27 12:10:51.143020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:27:01.234 [2024-11-27 12:10:51.143030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.234 [2024-11-27 12:10:51.191641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.234 [2024-11-27 12:10:51.191676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:01.234 [2024-11-27 12:10:51.191704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.234 [2024-11-27 12:10:51.191714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.234 [2024-11-27 12:10:51.191767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.234 [2024-11-27 12:10:51.191778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:01.234 [2024-11-27 12:10:51.191788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.234 [2024-11-27 12:10:51.191797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.235 [2024-11-27 12:10:51.191878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.235 [2024-11-27 12:10:51.191891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:01.235 [2024-11-27 12:10:51.191905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.235 [2024-11-27 12:10:51.191915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.235 [2024-11-27 12:10:51.191932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.235 [2024-11-27 12:10:51.191943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:01.235 [2024-11-27 12:10:51.191953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.235 [2024-11-27 12:10:51.191962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.493 [2024-11-27 12:10:51.309462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.493 [2024-11-27 12:10:51.309517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:01.493 [2024-11-27 12:10:51.309531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.493 [2024-11-27 12:10:51.309541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.493 [2024-11-27 12:10:51.403831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.493 [2024-11-27 12:10:51.403877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:01.493 [2024-11-27 12:10:51.403891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.493 [2024-11-27 12:10:51.403902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.493 [2024-11-27 12:10:51.404009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.493 [2024-11-27 12:10:51.404022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:01.493 [2024-11-27 12:10:51.404032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.493 [2024-11-27 12:10:51.404045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:01.493 [2024-11-27 12:10:51.404083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.493 [2024-11-27 12:10:51.404094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:01.493 [2024-11-27 12:10:51.404104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.493 [2024-11-27 12:10:51.404114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.493 [2024-11-27 12:10:51.404214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.493 [2024-11-27 12:10:51.404227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:01.493 [2024-11-27 12:10:51.404237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.493 [2024-11-27 12:10:51.404248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.493 [2024-11-27 12:10:51.404302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.493 [2024-11-27 12:10:51.404315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:01.493 [2024-11-27 12:10:51.404324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.493 [2024-11-27 12:10:51.404335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.494 [2024-11-27 12:10:51.404374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.494 [2024-11-27 12:10:51.404385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:01.494 [2024-11-27 12:10:51.404413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.494 [2024-11-27 12:10:51.404423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.494 [2024-11-27 12:10:51.404467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.494 [2024-11-27 12:10:51.404480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:01.494 [2024-11-27 12:10:51.404489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.494 [2024-11-27 12:10:51.404499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.494 [2024-11-27 12:10:51.404617] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 644.347 ms, result 0 00:27:02.429 00:27:02.429 00:27:02.429 12:10:52 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:04.343 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:04.343 12:10:54 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:27:04.343 12:10:54 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:27:04.343 12:10:54 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:04.343 12:10:54 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:04.343 12:10:54 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:04.343 Process with pid 78986 is not found 00:27:04.343 12:10:54 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 78986 00:27:04.343 12:10:54 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 78986 ']' 00:27:04.343 12:10:54 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 78986 00:27:04.343 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78986) - No such process 00:27:04.343 12:10:54 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 78986 is not found' 00:27:04.343 12:10:54 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:27:04.343 Remove shared memory files 00:27:04.343 12:10:54 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:04.343 12:10:54 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:27:04.343 12:10:54 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:27:04.343 12:10:54 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:27:04.343 12:10:54 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:04.343 12:10:54 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:27:04.343 00:27:04.343 real 3m20.748s 00:27:04.343 user 3m8.001s 00:27:04.343 sys 0m13.878s 00:27:04.343 12:10:54 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:04.343 12:10:54 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:27:04.343 ************************************ 00:27:04.343 END TEST ftl_restore 00:27:04.343 ************************************ 00:27:04.343 12:10:54 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:04.343 12:10:54 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:04.343 12:10:54 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:04.343 12:10:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:04.343 ************************************ 00:27:04.343 START TEST ftl_dirty_shutdown 00:27:04.343 ************************************ 00:27:04.343 12:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:04.604 * Looking for test storage... 
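(Editor's note: the START TEST banner and the real/user/sys timing block above come from the harness's run_test wrapper in autotest_common.sh. A hedged sketch of that wrapper's shape, inferred only from the banners and timing visible in this log; the actual implementation also handles the xtrace toggling seen here.)

    # Hedged sketch, not SPDK's actual run_test: print banners, time the test,
    # and propagate its exit status.
    run_test_sketch() {
        local name=$1
        shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        local rc=$?            # status of the timed command
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }
    # e.g.: run_test_sketch ftl_dirty_shutdown ./dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0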
00:27:04.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:04.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.604 --rc genhtml_branch_coverage=1 00:27:04.604 --rc genhtml_function_coverage=1 00:27:04.604 --rc genhtml_legend=1 00:27:04.604 --rc geninfo_all_blocks=1 00:27:04.604 --rc geninfo_unexecuted_blocks=1 00:27:04.604 00:27:04.604 ' 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:04.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.604 --rc genhtml_branch_coverage=1 00:27:04.604 --rc genhtml_function_coverage=1 00:27:04.604 --rc genhtml_legend=1 00:27:04.604 --rc geninfo_all_blocks=1 00:27:04.604 --rc geninfo_unexecuted_blocks=1 00:27:04.604 00:27:04.604 ' 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:04.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.604 --rc genhtml_branch_coverage=1 00:27:04.604 --rc genhtml_function_coverage=1 00:27:04.604 --rc genhtml_legend=1 00:27:04.604 --rc geninfo_all_blocks=1 00:27:04.604 --rc geninfo_unexecuted_blocks=1 00:27:04.604 00:27:04.604 ' 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:04.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.604 --rc genhtml_branch_coverage=1 00:27:04.604 --rc genhtml_function_coverage=1 00:27:04.604 --rc genhtml_legend=1 00:27:04.604 --rc geninfo_all_blocks=1 00:27:04.604 --rc geninfo_unexecuted_blocks=1 00:27:04.604 00:27:04.604 ' 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:27:04.604 12:10:54 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81119 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81119 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81119 ']' 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.604 12:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:04.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.605 12:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.605 12:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:04.605 12:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:04.864 [2024-11-27 12:10:54.679667] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
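(Editor's note: waitforlisten above blocks until the freshly launched spdk_tgt answers RPCs on /var/tmp/spdk.sock. A minimal sketch of that polling loop, assuming rpc.py's real rpc_get_methods call and the default socket path; this is an inferred shape, not the actual autotest_common.sh implementation.)

    # Hedged sketch: poll until the target's RPC socket answers, or the
    # process dies, or we time out (~10 s at 100 ms per attempt).
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1    # target process died
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
                &> /dev/null && return 0               # socket is answering
            sleep 0.1
        done
        return 1
    }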
00:27:04.864 [2024-11-27 12:10:54.679799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81119 ] 00:27:04.864 [2024-11-27 12:10:54.859915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.124 [2024-11-27 12:10:54.967696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.063 12:10:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:06.063 12:10:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:06.063 12:10:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:06.063 12:10:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:27:06.063 12:10:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:06.063 12:10:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:27:06.063 12:10:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:06.063 12:10:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:06.063 12:10:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:06.063 12:10:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:06.063 12:10:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:06.063 12:10:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:06.063 12:10:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:06.063 12:10:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:06.063 12:10:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:06.063 12:10:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:06.323 12:10:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:06.323 { 00:27:06.323 "name": "nvme0n1", 00:27:06.323 "aliases": [ 00:27:06.323 "59a9876f-0c57-47e4-a46b-90c3aaf13d56" 00:27:06.323 ], 00:27:06.323 "product_name": "NVMe disk", 00:27:06.323 "block_size": 4096, 00:27:06.323 "num_blocks": 1310720, 00:27:06.323 "uuid": "59a9876f-0c57-47e4-a46b-90c3aaf13d56", 00:27:06.323 "numa_id": -1, 00:27:06.323 "assigned_rate_limits": { 00:27:06.323 "rw_ios_per_sec": 0, 00:27:06.323 "rw_mbytes_per_sec": 0, 00:27:06.323 "r_mbytes_per_sec": 0, 00:27:06.323 "w_mbytes_per_sec": 0 00:27:06.323 }, 00:27:06.323 "claimed": true, 00:27:06.323 "claim_type": "read_many_write_one", 00:27:06.323 "zoned": false, 00:27:06.323 "supported_io_types": { 00:27:06.323 "read": true, 00:27:06.323 "write": true, 00:27:06.323 "unmap": true, 00:27:06.323 "flush": true, 00:27:06.323 "reset": true, 00:27:06.323 "nvme_admin": true, 00:27:06.323 "nvme_io": true, 00:27:06.323 "nvme_io_md": false, 00:27:06.323 "write_zeroes": true, 00:27:06.323 "zcopy": false, 00:27:06.323 "get_zone_info": false, 00:27:06.323 "zone_management": false, 00:27:06.323 "zone_append": false, 00:27:06.323 "compare": true, 00:27:06.323 "compare_and_write": false, 00:27:06.323 "abort": true, 00:27:06.323 "seek_hole": false, 00:27:06.323 "seek_data": false, 00:27:06.323 
"copy": true, 00:27:06.323 "nvme_iov_md": false 00:27:06.323 }, 00:27:06.323 "driver_specific": { 00:27:06.323 "nvme": [ 00:27:06.323 { 00:27:06.323 "pci_address": "0000:00:11.0", 00:27:06.323 "trid": { 00:27:06.323 "trtype": "PCIe", 00:27:06.323 "traddr": "0000:00:11.0" 00:27:06.323 }, 00:27:06.323 "ctrlr_data": { 00:27:06.323 "cntlid": 0, 00:27:06.323 "vendor_id": "0x1b36", 00:27:06.323 "model_number": "QEMU NVMe Ctrl", 00:27:06.323 "serial_number": "12341", 00:27:06.323 "firmware_revision": "8.0.0", 00:27:06.323 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:06.323 "oacs": { 00:27:06.323 "security": 0, 00:27:06.323 "format": 1, 00:27:06.323 "firmware": 0, 00:27:06.323 "ns_manage": 1 00:27:06.323 }, 00:27:06.323 "multi_ctrlr": false, 00:27:06.323 "ana_reporting": false 00:27:06.323 }, 00:27:06.323 "vs": { 00:27:06.323 "nvme_version": "1.4" 00:27:06.323 }, 00:27:06.323 "ns_data": { 00:27:06.323 "id": 1, 00:27:06.324 "can_share": false 00:27:06.324 } 00:27:06.324 } 00:27:06.324 ], 00:27:06.324 "mp_policy": "active_passive" 00:27:06.324 } 00:27:06.324 } 00:27:06.324 ]' 00:27:06.324 12:10:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:06.324 12:10:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:06.324 12:10:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:06.324 12:10:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:06.324 12:10:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:06.324 12:10:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:06.324 12:10:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:06.324 12:10:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:06.324 12:10:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:06.324 12:10:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:06.324 12:10:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:06.583 12:10:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=84039ef2-2895-482c-a3db-17dd62429e84 00:27:06.583 12:10:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:06.583 12:10:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 84039ef2-2895-482c-a3db-17dd62429e84 00:27:06.842 12:10:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:07.102 12:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=0989c05b-2f40-461a-9537-7bc5d8471547 00:27:07.102 12:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0989c05b-2f40-461a-9537-7bc5d8471547 00:27:07.362 12:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=bbd3a7ad-3a4f-4977-a3ad-049e4aa42658 00:27:07.362 12:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:27:07.362 12:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 bbd3a7ad-3a4f-4977-a3ad-049e4aa42658 00:27:07.362 12:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:27:07.362 12:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:27:07.362 12:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=bbd3a7ad-3a4f-4977-a3ad-049e4aa42658 00:27:07.362 12:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:27:07.362 12:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size bbd3a7ad-3a4f-4977-a3ad-049e4aa42658 00:27:07.362 12:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=bbd3a7ad-3a4f-4977-a3ad-049e4aa42658 00:27:07.362 12:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:07.362 12:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:07.362 12:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:07.362 12:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bbd3a7ad-3a4f-4977-a3ad-049e4aa42658 00:27:07.622 12:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:07.622 { 00:27:07.622 "name": "bbd3a7ad-3a4f-4977-a3ad-049e4aa42658", 00:27:07.622 "aliases": [ 00:27:07.622 "lvs/nvme0n1p0" 00:27:07.622 ], 00:27:07.622 "product_name": "Logical Volume", 00:27:07.622 "block_size": 4096, 00:27:07.622 "num_blocks": 26476544, 00:27:07.622 "uuid": "bbd3a7ad-3a4f-4977-a3ad-049e4aa42658", 00:27:07.622 "assigned_rate_limits": { 00:27:07.622 "rw_ios_per_sec": 0, 00:27:07.622 "rw_mbytes_per_sec": 0, 00:27:07.622 "r_mbytes_per_sec": 0, 00:27:07.622 "w_mbytes_per_sec": 0 00:27:07.622 }, 00:27:07.622 "claimed": false, 00:27:07.622 "zoned": false, 00:27:07.622 "supported_io_types": { 00:27:07.622 "read": true, 00:27:07.622 "write": true, 00:27:07.622 "unmap": true, 00:27:07.622 "flush": false, 00:27:07.622 "reset": true, 00:27:07.623 "nvme_admin": false, 00:27:07.623 "nvme_io": false, 00:27:07.623 "nvme_io_md": false, 00:27:07.623 "write_zeroes": true, 00:27:07.623 "zcopy": false, 00:27:07.623 "get_zone_info": false, 00:27:07.623 "zone_management": false, 00:27:07.623 "zone_append": false, 00:27:07.623 "compare": false, 00:27:07.623 "compare_and_write": false, 00:27:07.623 "abort": false, 00:27:07.623 "seek_hole": true, 00:27:07.623 "seek_data": true, 00:27:07.623 "copy": false, 00:27:07.623 "nvme_iov_md": false 00:27:07.623 }, 00:27:07.623 "driver_specific": { 00:27:07.623 "lvol": { 00:27:07.623 "lvol_store_uuid": "0989c05b-2f40-461a-9537-7bc5d8471547", 00:27:07.623 "base_bdev": "nvme0n1", 00:27:07.623 "thin_provision": true, 00:27:07.623 "num_allocated_clusters": 0, 00:27:07.623 "snapshot": false, 00:27:07.623 "clone": false, 00:27:07.623 "esnap_clone": false 00:27:07.623 } 00:27:07.623 } 00:27:07.623 } 00:27:07.623 ]' 00:27:07.623 12:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:07.623 12:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:07.623 12:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:07.623 12:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:07.623 12:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:07.623 12:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:07.623 12:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:27:07.623 12:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:07.623 12:10:57 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:07.882 12:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:07.882 12:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:07.882 12:10:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size bbd3a7ad-3a4f-4977-a3ad-049e4aa42658 00:27:07.882 12:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=bbd3a7ad-3a4f-4977-a3ad-049e4aa42658 00:27:07.882 12:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:07.882 12:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:07.882 12:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:07.882 12:10:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bbd3a7ad-3a4f-4977-a3ad-049e4aa42658 00:27:08.142 12:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:08.142 { 00:27:08.142 "name": "bbd3a7ad-3a4f-4977-a3ad-049e4aa42658", 00:27:08.142 "aliases": [ 00:27:08.142 "lvs/nvme0n1p0" 00:27:08.142 ], 00:27:08.142 "product_name": "Logical Volume", 00:27:08.142 "block_size": 4096, 00:27:08.142 "num_blocks": 26476544, 00:27:08.142 "uuid": "bbd3a7ad-3a4f-4977-a3ad-049e4aa42658", 00:27:08.142 "assigned_rate_limits": { 00:27:08.142 "rw_ios_per_sec": 0, 00:27:08.142 "rw_mbytes_per_sec": 0, 00:27:08.142 "r_mbytes_per_sec": 0, 00:27:08.142 "w_mbytes_per_sec": 0 00:27:08.142 }, 00:27:08.142 "claimed": false, 00:27:08.142 "zoned": false, 00:27:08.142 "supported_io_types": { 00:27:08.142 "read": true, 00:27:08.142 "write": true, 00:27:08.142 "unmap": true, 00:27:08.142 "flush": false, 00:27:08.142 "reset": true, 00:27:08.142 "nvme_admin": false, 00:27:08.142 "nvme_io": false, 00:27:08.142 "nvme_io_md": false, 00:27:08.142 "write_zeroes": true, 00:27:08.142 "zcopy": false, 00:27:08.142 "get_zone_info": false, 00:27:08.142 "zone_management": false, 00:27:08.142 "zone_append": false, 00:27:08.142 "compare": false, 00:27:08.142 "compare_and_write": false, 00:27:08.142 "abort": false, 00:27:08.142 "seek_hole": true, 00:27:08.142 "seek_data": true, 00:27:08.142 "copy": false, 00:27:08.142 "nvme_iov_md": false 00:27:08.142 }, 00:27:08.142 "driver_specific": { 00:27:08.142 "lvol": { 00:27:08.142 "lvol_store_uuid": "0989c05b-2f40-461a-9537-7bc5d8471547", 00:27:08.142 "base_bdev": "nvme0n1", 00:27:08.142 "thin_provision": true, 00:27:08.142 "num_allocated_clusters": 0, 00:27:08.142 "snapshot": false, 00:27:08.142 "clone": false, 00:27:08.142 "esnap_clone": false 00:27:08.142 } 00:27:08.142 } 00:27:08.142 } 00:27:08.143 ]' 00:27:08.143 12:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:08.143 12:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:08.143 12:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:08.143 12:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:08.143 12:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:08.143 12:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:08.143 12:10:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:27:08.143 12:10:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:08.402 12:10:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:27:08.402 12:10:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size bbd3a7ad-3a4f-4977-a3ad-049e4aa42658 00:27:08.402 12:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=bbd3a7ad-3a4f-4977-a3ad-049e4aa42658 00:27:08.402 12:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:08.402 12:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:08.402 12:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:08.402 12:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bbd3a7ad-3a4f-4977-a3ad-049e4aa42658 00:27:08.663 12:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:08.663 { 00:27:08.663 "name": "bbd3a7ad-3a4f-4977-a3ad-049e4aa42658", 00:27:08.663 "aliases": [ 00:27:08.663 "lvs/nvme0n1p0" 00:27:08.663 ], 00:27:08.663 "product_name": "Logical Volume", 00:27:08.663 "block_size": 4096, 00:27:08.663 "num_blocks": 26476544, 00:27:08.663 "uuid": "bbd3a7ad-3a4f-4977-a3ad-049e4aa42658", 00:27:08.663 "assigned_rate_limits": { 00:27:08.663 "rw_ios_per_sec": 0, 00:27:08.663 "rw_mbytes_per_sec": 0, 00:27:08.663 "r_mbytes_per_sec": 0, 00:27:08.663 "w_mbytes_per_sec": 0 00:27:08.663 }, 00:27:08.663 "claimed": false, 00:27:08.663 "zoned": false, 00:27:08.663 "supported_io_types": { 00:27:08.663 "read": true, 00:27:08.663 "write": true, 00:27:08.663 "unmap": true, 00:27:08.663 "flush": false, 00:27:08.663 "reset": true, 00:27:08.663 "nvme_admin": false, 00:27:08.663 "nvme_io": false, 00:27:08.663 "nvme_io_md": false, 00:27:08.663 "write_zeroes": true, 00:27:08.663 "zcopy": false, 00:27:08.663 "get_zone_info": false, 00:27:08.663 "zone_management": false, 00:27:08.663 "zone_append": false, 00:27:08.663 "compare": false, 00:27:08.663 "compare_and_write": false, 00:27:08.663 "abort": false, 00:27:08.663 "seek_hole": true, 00:27:08.663 "seek_data": true, 00:27:08.663 "copy": false, 00:27:08.663 "nvme_iov_md": false 00:27:08.663 }, 00:27:08.663 "driver_specific": { 00:27:08.663 "lvol": { 00:27:08.663 "lvol_store_uuid": "0989c05b-2f40-461a-9537-7bc5d8471547", 00:27:08.663 "base_bdev": "nvme0n1", 00:27:08.663 "thin_provision": true, 00:27:08.663 "num_allocated_clusters": 0, 00:27:08.663 "snapshot": false, 00:27:08.663 "clone": false, 00:27:08.663 "esnap_clone": false 00:27:08.663 } 00:27:08.663 } 00:27:08.663 } 00:27:08.663 ]' 00:27:08.663 12:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:08.663 12:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:08.663 12:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:08.663 12:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:08.663 12:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:08.663 12:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:08.663 12:10:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:27:08.663 12:10:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d bbd3a7ad-3a4f-4977-a3ad-049e4aa42658 
--l2p_dram_limit 10' 00:27:08.663 12:10:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:27:08.663 12:10:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:27:08.663 12:10:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:08.663 12:10:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d bbd3a7ad-3a4f-4977-a3ad-049e4aa42658 --l2p_dram_limit 10 -c nvc0n1p0 00:27:08.924 [2024-11-27 12:10:58.769954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.924 [2024-11-27 12:10:58.769998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:08.924 [2024-11-27 12:10:58.770017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:08.924 [2024-11-27 12:10:58.770027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.924 [2024-11-27 12:10:58.770092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.924 [2024-11-27 12:10:58.770105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:08.924 [2024-11-27 12:10:58.770117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:27:08.924 [2024-11-27 12:10:58.770127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.924 [2024-11-27 12:10:58.770149] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:08.924 [2024-11-27 12:10:58.771137] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:08.924 [2024-11-27 12:10:58.771165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.924 [2024-11-27 12:10:58.771176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:08.924 [2024-11-27 12:10:58.771190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:27:08.924 [2024-11-27 12:10:58.771200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.925 [2024-11-27 12:10:58.771279] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 36b3d101-e517-4a51-840e-d1a112a8f9ea 00:27:08.925 [2024-11-27 12:10:58.772760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.925 [2024-11-27 12:10:58.772785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:08.925 [2024-11-27 12:10:58.772797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:08.925 [2024-11-27 12:10:58.772809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.925 [2024-11-27 12:10:58.780409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.925 [2024-11-27 12:10:58.780439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:08.925 [2024-11-27 12:10:58.780451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.575 ms 00:27:08.925 [2024-11-27 12:10:58.780463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.925 [2024-11-27 12:10:58.780556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.925 [2024-11-27 12:10:58.780572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:08.925 [2024-11-27 12:10:58.780583] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:27:08.925 [2024-11-27 12:10:58.780599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.925 [2024-11-27 12:10:58.780665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.925 [2024-11-27 12:10:58.780683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:08.925 [2024-11-27 12:10:58.780693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:08.925 [2024-11-27 12:10:58.780704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.925 [2024-11-27 12:10:58.780728] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:08.925 [2024-11-27 12:10:58.785517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.925 [2024-11-27 12:10:58.785544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:08.925 [2024-11-27 12:10:58.785575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.801 ms 00:27:08.925 [2024-11-27 12:10:58.785585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.925 [2024-11-27 12:10:58.785623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.925 [2024-11-27 12:10:58.785634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:08.925 [2024-11-27 12:10:58.785646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:08.925 [2024-11-27 12:10:58.785657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.925 [2024-11-27 12:10:58.785701] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:08.925 [2024-11-27 12:10:58.785828] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:08.925 [2024-11-27 12:10:58.785848] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:08.925 [2024-11-27 12:10:58.785861] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:08.925 [2024-11-27 12:10:58.785876] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:08.925 [2024-11-27 12:10:58.785888] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:08.925 [2024-11-27 12:10:58.785905] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:08.925 [2024-11-27 12:10:58.785915] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:08.925 [2024-11-27 12:10:58.785928] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:08.925 [2024-11-27 12:10:58.785938] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:08.925 [2024-11-27 12:10:58.785951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.925 [2024-11-27 12:10:58.785971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:08.925 [2024-11-27 12:10:58.785985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:27:08.925 [2024-11-27 12:10:58.785995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.925 [2024-11-27 12:10:58.786070] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.925 [2024-11-27 12:10:58.786081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:08.925 [2024-11-27 12:10:58.786093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:08.925 [2024-11-27 12:10:58.786106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.925 [2024-11-27 12:10:58.786197] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:08.925 [2024-11-27 12:10:58.786210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:08.925 [2024-11-27 12:10:58.786223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:08.925 [2024-11-27 12:10:58.786233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:08.925 [2024-11-27 12:10:58.786246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:08.925 [2024-11-27 12:10:58.786255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:08.925 [2024-11-27 12:10:58.786266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:08.925 [2024-11-27 12:10:58.786275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:08.925 [2024-11-27 12:10:58.786287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:08.925 [2024-11-27 12:10:58.786296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:08.925 [2024-11-27 12:10:58.786308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:08.925 [2024-11-27 12:10:58.786318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:08.925 [2024-11-27 12:10:58.786329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:08.925 [2024-11-27 12:10:58.786339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:08.925 [2024-11-27 12:10:58.786351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:08.925 [2024-11-27 12:10:58.786377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:08.925 [2024-11-27 12:10:58.786391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:08.925 [2024-11-27 12:10:58.786400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:08.925 [2024-11-27 12:10:58.786413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:08.925 [2024-11-27 12:10:58.786423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:08.925 [2024-11-27 12:10:58.786435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:08.925 [2024-11-27 12:10:58.786444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:08.925 [2024-11-27 12:10:58.786456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:08.925 [2024-11-27 12:10:58.786465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:08.925 [2024-11-27 12:10:58.786476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:08.925 [2024-11-27 12:10:58.786485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:08.925 [2024-11-27 12:10:58.786497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:08.925 [2024-11-27 12:10:58.786506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:08.925 [2024-11-27 12:10:58.786517] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:08.925 [2024-11-27 12:10:58.786527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:08.925 [2024-11-27 12:10:58.786538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:08.925 [2024-11-27 12:10:58.786546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:08.925 [2024-11-27 12:10:58.786560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:08.925 [2024-11-27 12:10:58.786569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:08.925 [2024-11-27 12:10:58.786580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:08.925 [2024-11-27 12:10:58.786589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:08.925 [2024-11-27 12:10:58.786600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:08.925 [2024-11-27 12:10:58.786609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:08.926 [2024-11-27 12:10:58.786621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:08.926 [2024-11-27 12:10:58.786629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:08.926 [2024-11-27 12:10:58.786641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:08.926 [2024-11-27 12:10:58.786649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:08.926 [2024-11-27 12:10:58.786660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:08.926 [2024-11-27 12:10:58.786669] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:08.926 [2024-11-27 12:10:58.786682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:08.926 [2024-11-27 12:10:58.786693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:08.926 [2024-11-27 12:10:58.786707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:08.926 [2024-11-27 12:10:58.786721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:08.926 [2024-11-27 12:10:58.786735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:08.926 [2024-11-27 12:10:58.786744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:08.926 [2024-11-27 12:10:58.786756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:08.926 [2024-11-27 12:10:58.786765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:08.926 [2024-11-27 12:10:58.786776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:08.926 [2024-11-27 12:10:58.786790] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:08.926 [2024-11-27 12:10:58.786805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:08.926 [2024-11-27 12:10:58.786816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:08.926 [2024-11-27 12:10:58.786829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:08.926 [2024-11-27 12:10:58.786839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:08.926 [2024-11-27 12:10:58.786852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:08.926 [2024-11-27 12:10:58.786862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:08.926 [2024-11-27 12:10:58.786875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:08.926 [2024-11-27 12:10:58.786885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:08.926 [2024-11-27 12:10:58.786898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:08.926 [2024-11-27 12:10:58.786908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:08.926 [2024-11-27 12:10:58.786924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:08.926 [2024-11-27 12:10:58.786933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:08.926 [2024-11-27 12:10:58.786946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:08.926 [2024-11-27 12:10:58.786956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:08.926 [2024-11-27 12:10:58.786970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:08.926 [2024-11-27 12:10:58.786980] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:08.926 [2024-11-27 12:10:58.786994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:08.926 [2024-11-27 12:10:58.787004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:08.926 [2024-11-27 12:10:58.787017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:08.926 [2024-11-27 12:10:58.787027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:08.926 [2024-11-27 12:10:58.787040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:08.926 [2024-11-27 12:10:58.787051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.926 [2024-11-27 12:10:58.787063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:08.926 [2024-11-27 12:10:58.787073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:27:08.926 [2024-11-27 12:10:58.787085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.926 [2024-11-27 12:10:58.787125] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:08.926 [2024-11-27 12:10:58.787142] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:12.222 [2024-11-27 12:11:02.232085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.222 [2024-11-27 12:11:02.232157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:12.222 [2024-11-27 12:11:02.232179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3450.549 ms 00:27:12.222 [2024-11-27 12:11:02.232196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.483 [2024-11-27 12:11:02.280986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.483 [2024-11-27 12:11:02.281051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:12.483 [2024-11-27 12:11:02.281073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.535 ms 00:27:12.483 [2024-11-27 12:11:02.281090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.483 [2024-11-27 12:11:02.281261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.483 [2024-11-27 12:11:02.281282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:12.483 [2024-11-27 12:11:02.281300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:27:12.483 [2024-11-27 12:11:02.281321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.483 [2024-11-27 12:11:02.334292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.483 [2024-11-27 12:11:02.334346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:12.483 [2024-11-27 12:11:02.334372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.970 ms 00:27:12.483 [2024-11-27 12:11:02.334388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.483 [2024-11-27 12:11:02.334445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.483 [2024-11-27 12:11:02.334462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:12.483 [2024-11-27 12:11:02.334476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:12.483 [2024-11-27 12:11:02.334506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.483 [2024-11-27 12:11:02.335345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.483 [2024-11-27 12:11:02.335381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:12.483 [2024-11-27 12:11:02.335396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:27:12.483 [2024-11-27 12:11:02.335412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.483 [2024-11-27 12:11:02.335527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.483 [2024-11-27 12:11:02.335544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:12.483 [2024-11-27 12:11:02.335557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:27:12.483 [2024-11-27 12:11:02.335576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.483 [2024-11-27 12:11:02.358863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.483 [2024-11-27 12:11:02.358910] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:12.483 [2024-11-27 12:11:02.358927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.299 ms 00:27:12.483 [2024-11-27 12:11:02.358943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.483 [2024-11-27 12:11:02.390309] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:12.483 [2024-11-27 12:11:02.396040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.483 [2024-11-27 12:11:02.396082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:12.483 [2024-11-27 12:11:02.396108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.039 ms 00:27:12.483 [2024-11-27 12:11:02.396124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.483 [2024-11-27 12:11:02.485582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.483 [2024-11-27 12:11:02.485625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:12.483 [2024-11-27 12:11:02.485647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.545 ms 00:27:12.483 [2024-11-27 12:11:02.485660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.483 [2024-11-27 12:11:02.485878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.483 [2024-11-27 12:11:02.485894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:12.483 [2024-11-27 12:11:02.485914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:27:12.483 [2024-11-27 12:11:02.485926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.483 [2024-11-27 12:11:02.520517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.483 [2024-11-27 12:11:02.520566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:12.483 [2024-11-27 12:11:02.520589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.580 ms 00:27:12.483 [2024-11-27 12:11:02.520605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.743 [2024-11-27 12:11:02.554509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.743 [2024-11-27 12:11:02.554547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:12.743 [2024-11-27 12:11:02.554567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.905 ms 00:27:12.743 [2024-11-27 12:11:02.554579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.743 [2024-11-27 12:11:02.555310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.743 [2024-11-27 12:11:02.555329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:12.743 [2024-11-27 12:11:02.555350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:27:12.743 [2024-11-27 12:11:02.555377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.743 [2024-11-27 12:11:02.653119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.743 [2024-11-27 12:11:02.653159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:12.743 [2024-11-27 12:11:02.653183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.832 ms 00:27:12.743 [2024-11-27 12:11:02.653196] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.743 [2024-11-27 12:11:02.690242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.743 [2024-11-27 12:11:02.690281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:12.743 [2024-11-27 12:11:02.690301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.006 ms 00:27:12.743 [2024-11-27 12:11:02.690313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.743 [2024-11-27 12:11:02.724297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.743 [2024-11-27 12:11:02.724345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:12.743 [2024-11-27 12:11:02.724374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.975 ms 00:27:12.743 [2024-11-27 12:11:02.724385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.743 [2024-11-27 12:11:02.759664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.743 [2024-11-27 12:11:02.759703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:12.743 [2024-11-27 12:11:02.759723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.287 ms 00:27:12.743 [2024-11-27 12:11:02.759735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.743 [2024-11-27 12:11:02.759792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.743 [2024-11-27 12:11:02.759806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:12.743 [2024-11-27 12:11:02.759827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:12.743 [2024-11-27 12:11:02.759838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.743 [2024-11-27 12:11:02.759962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.743 [2024-11-27 12:11:02.759977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:12.743 [2024-11-27 12:11:02.759993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:27:12.743 [2024-11-27 12:11:02.760005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.743 [2024-11-27 12:11:02.761509] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3997.460 ms, result 0 00:27:12.743 { 00:27:12.743 "name": "ftl0", 00:27:12.743 "uuid": "36b3d101-e517-4a51-840e-d1a112a8f9ea" 00:27:12.743 } 00:27:13.003 12:11:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:27:13.003 12:11:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:13.003 12:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:27:13.003 12:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:27:13.003 12:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:27:13.263 /dev/nbd0 00:27:13.263 12:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:27:13.263 12:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:13.263 12:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:27:13.263 12:11:03 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:13.263 12:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:13.263 12:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:13.263 12:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:27:13.263 12:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:13.263 12:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:13.263 12:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:27:13.263 1+0 records in 00:27:13.263 1+0 records out 00:27:13.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040527 s, 10.1 MB/s 00:27:13.263 12:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:13.263 12:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:27:13.263 12:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:13.263 12:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:13.263 12:11:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:27:13.263 12:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:27:13.523 [2024-11-27 12:11:03.340671] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:27:13.523 [2024-11-27 12:11:03.340785] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81269 ] 00:27:13.523 [2024-11-27 12:11:03.525315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.783 [2024-11-27 12:11:03.657271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.190  [2024-11-27T12:11:06.242Z] Copying: 204/1024 [MB] (204 MBps) [2024-11-27T12:11:07.177Z] Copying: 413/1024 [MB] (208 MBps) [2024-11-27T12:11:08.113Z] Copying: 622/1024 [MB] (209 MBps) [2024-11-27T12:11:09.049Z] Copying: 828/1024 [MB] (206 MBps) [2024-11-27T12:11:10.425Z] Copying: 1024/1024 [MB] (average 206 MBps) 00:27:20.372 00:27:20.372 12:11:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:22.280 12:11:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:27:22.280 [2024-11-27 12:11:11.923471] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
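(For orientation, a condensed sketch of the write-and-dirty-shutdown phase this log is executing, assuming the same paths and sizes logged above; the sync/stop/unload RPCs appear further down in the log. This reconstructs the flow from the logged commands and is not the test script verbatim.)

    #!/usr/bin/env bash
    # Sketch of the dirty-shutdown write phase, reconstructed from this log.
    SPDK=/home/vagrant/spdk_repo/spdk

    # 262144 blocks x 4096 B = 1 GiB of random data as the reference payload
    "$SPDK/build/bin/spdk_dd" -m 0x2 --if=/dev/urandom \
        --of="$SPDK/test/ftl/testfile" --bs=4096 --count=262144
    md5sum "$SPDK/test/ftl/testfile"   # reference checksum for later comparison

    # Push the payload through the FTL bdev exposed as /dev/nbd0, using O_DIRECT
    "$SPDK/build/bin/spdk_dd" -m 0x2 --if="$SPDK/test/ftl/testfile" \
        --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

    sync /dev/nbd0                                   # flush before teardown
    "$SPDK/scripts/rpc.py" nbd_stop_disk /dev/nbd0   # detach the NBD export
    "$SPDK/scripts/rpc.py" bdev_ftl_unload -b ftl0   # FTL unload, logged below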
00:27:22.280 [2024-11-27 12:11:11.923595] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81358 ] 00:27:22.280 [2024-11-27 12:11:12.100211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.280 [2024-11-27 12:11:12.231887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.659  [2024-11-27T12:11:14.651Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-27T12:11:16.031Z] Copying: 33/1024 [MB] (16 MBps) [2024-11-27T12:11:16.601Z] Copying: 50/1024 [MB] (16 MBps) [2024-11-27T12:11:17.981Z] Copying: 67/1024 [MB] (17 MBps) [2024-11-27T12:11:18.919Z] Copying: 85/1024 [MB] (17 MBps) [2024-11-27T12:11:19.855Z] Copying: 102/1024 [MB] (17 MBps) [2024-11-27T12:11:20.793Z] Copying: 120/1024 [MB] (17 MBps) [2024-11-27T12:11:21.731Z] Copying: 137/1024 [MB] (17 MBps) [2024-11-27T12:11:22.668Z] Copying: 155/1024 [MB] (17 MBps) [2024-11-27T12:11:23.614Z] Copying: 173/1024 [MB] (17 MBps) [2024-11-27T12:11:24.992Z] Copying: 190/1024 [MB] (17 MBps) [2024-11-27T12:11:25.929Z] Copying: 207/1024 [MB] (17 MBps) [2024-11-27T12:11:26.865Z] Copying: 225/1024 [MB] (17 MBps) [2024-11-27T12:11:27.802Z] Copying: 242/1024 [MB] (17 MBps) [2024-11-27T12:11:28.742Z] Copying: 260/1024 [MB] (17 MBps) [2024-11-27T12:11:29.679Z] Copying: 277/1024 [MB] (17 MBps) [2024-11-27T12:11:30.616Z] Copying: 294/1024 [MB] (17 MBps) [2024-11-27T12:11:31.994Z] Copying: 312/1024 [MB] (17 MBps) [2024-11-27T12:11:32.932Z] Copying: 329/1024 [MB] (17 MBps) [2024-11-27T12:11:33.901Z] Copying: 346/1024 [MB] (17 MBps) [2024-11-27T12:11:34.838Z] Copying: 364/1024 [MB] (17 MBps) [2024-11-27T12:11:35.772Z] Copying: 381/1024 [MB] (17 MBps) [2024-11-27T12:11:36.708Z] Copying: 398/1024 [MB] (17 MBps) [2024-11-27T12:11:37.644Z] Copying: 416/1024 [MB] (17 MBps) [2024-11-27T12:11:38.579Z] Copying: 433/1024 [MB] (17 MBps) [2024-11-27T12:11:39.955Z] Copying: 450/1024 [MB] (17 MBps) [2024-11-27T12:11:40.892Z] Copying: 467/1024 [MB] (17 MBps) [2024-11-27T12:11:41.829Z] Copying: 484/1024 [MB] (17 MBps) [2024-11-27T12:11:42.768Z] Copying: 501/1024 [MB] (17 MBps) [2024-11-27T12:11:43.706Z] Copying: 518/1024 [MB] (17 MBps) [2024-11-27T12:11:44.643Z] Copying: 535/1024 [MB] (17 MBps) [2024-11-27T12:11:45.582Z] Copying: 553/1024 [MB] (17 MBps) [2024-11-27T12:11:46.961Z] Copying: 570/1024 [MB] (17 MBps) [2024-11-27T12:11:47.899Z] Copying: 587/1024 [MB] (16 MBps) [2024-11-27T12:11:48.837Z] Copying: 604/1024 [MB] (17 MBps) [2024-11-27T12:11:49.772Z] Copying: 621/1024 [MB] (16 MBps) [2024-11-27T12:11:50.710Z] Copying: 638/1024 [MB] (17 MBps) [2024-11-27T12:11:51.648Z] Copying: 655/1024 [MB] (17 MBps) [2024-11-27T12:11:52.586Z] Copying: 672/1024 [MB] (16 MBps) [2024-11-27T12:11:53.965Z] Copying: 689/1024 [MB] (16 MBps) [2024-11-27T12:11:54.545Z] Copying: 706/1024 [MB] (17 MBps) [2024-11-27T12:11:55.928Z] Copying: 723/1024 [MB] (17 MBps) [2024-11-27T12:11:56.866Z] Copying: 740/1024 [MB] (17 MBps) [2024-11-27T12:11:57.803Z] Copying: 757/1024 [MB] (17 MBps) [2024-11-27T12:11:58.738Z] Copying: 774/1024 [MB] (16 MBps) [2024-11-27T12:11:59.671Z] Copying: 791/1024 [MB] (17 MBps) [2024-11-27T12:12:00.603Z] Copying: 808/1024 [MB] (17 MBps) [2024-11-27T12:12:01.539Z] Copying: 825/1024 [MB] (17 MBps) [2024-11-27T12:12:02.979Z] Copying: 842/1024 [MB] (17 MBps) [2024-11-27T12:12:03.569Z] Copying: 860/1024 [MB] (17 MBps) 
[2024-11-27T12:12:04.946Z] Copying: 877/1024 [MB] (17 MBps) [2024-11-27T12:12:05.884Z] Copying: 894/1024 [MB] (16 MBps) [2024-11-27T12:12:06.821Z] Copying: 911/1024 [MB] (16 MBps) [2024-11-27T12:12:07.759Z] Copying: 928/1024 [MB] (17 MBps) [2024-11-27T12:12:08.694Z] Copying: 945/1024 [MB] (17 MBps) [2024-11-27T12:12:09.630Z] Copying: 962/1024 [MB] (16 MBps) [2024-11-27T12:12:10.567Z] Copying: 979/1024 [MB] (17 MBps) [2024-11-27T12:12:11.945Z] Copying: 996/1024 [MB] (17 MBps) [2024-11-27T12:12:12.203Z] Copying: 1014/1024 [MB] (17 MBps) [2024-11-27T12:12:13.613Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:28:23.560 00:28:23.560 12:12:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:28:23.561 12:12:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:28:23.561 12:12:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:23.821 [2024-11-27 12:12:13.668652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.821 [2024-11-27 12:12:13.668707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:23.821 [2024-11-27 12:12:13.668741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:23.821 [2024-11-27 12:12:13.668758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.821 [2024-11-27 12:12:13.668783] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:23.821 [2024-11-27 12:12:13.672876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.821 [2024-11-27 12:12:13.672912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:23.821 [2024-11-27 12:12:13.672944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.076 ms 00:28:23.821 [2024-11-27 12:12:13.672955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.821 [2024-11-27 12:12:13.675120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.821 [2024-11-27 12:12:13.675161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:23.821 [2024-11-27 12:12:13.675178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.129 ms 00:28:23.821 [2024-11-27 12:12:13.675193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.821 [2024-11-27 12:12:13.693170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.821 [2024-11-27 12:12:13.693210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:23.821 [2024-11-27 12:12:13.693233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.978 ms 00:28:23.821 [2024-11-27 12:12:13.693245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.821 [2024-11-27 12:12:13.698209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.821 [2024-11-27 12:12:13.698244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:23.821 [2024-11-27 12:12:13.698275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.928 ms 00:28:23.821 [2024-11-27 12:12:13.698286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.821 [2024-11-27 12:12:13.735448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.821 [2024-11-27 12:12:13.735488] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:23.821 [2024-11-27 12:12:13.735505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.137 ms 00:28:23.821 [2024-11-27 12:12:13.735516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.821 [2024-11-27 12:12:13.757362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.821 [2024-11-27 12:12:13.757418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:23.821 [2024-11-27 12:12:13.757452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.822 ms 00:28:23.821 [2024-11-27 12:12:13.757462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.821 [2024-11-27 12:12:13.757616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.821 [2024-11-27 12:12:13.757630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:23.821 [2024-11-27 12:12:13.757645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:28:23.821 [2024-11-27 12:12:13.757655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.821 [2024-11-27 12:12:13.793511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.821 [2024-11-27 12:12:13.793550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:23.821 [2024-11-27 12:12:13.793566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.887 ms 00:28:23.821 [2024-11-27 12:12:13.793576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.821 [2024-11-27 12:12:13.829002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.821 [2024-11-27 12:12:13.829038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:23.821 [2024-11-27 12:12:13.829054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.437 ms 00:28:23.821 [2024-11-27 12:12:13.829064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.821 [2024-11-27 12:12:13.863581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.821 [2024-11-27 12:12:13.863626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:23.821 [2024-11-27 12:12:13.863660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.507 ms 00:28:23.821 [2024-11-27 12:12:13.863670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.082 [2024-11-27 12:12:13.898952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.082 [2024-11-27 12:12:13.898987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:24.082 [2024-11-27 12:12:13.899003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.239 ms 00:28:24.082 [2024-11-27 12:12:13.899013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.082 [2024-11-27 12:12:13.899056] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:24.082 [2024-11-27 12:12:13.899073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:24.082 [2024-11-27 12:12:13.899088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:24.082 [2024-11-27 12:12:13.899100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 
wr_cnt: 0 state: free 00:28:24.082 [2024-11-27 12:12:13.899113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:24.082 [2024-11-27 12:12:13.899125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:24.082 [2024-11-27 12:12:13.899138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:24.082 [2024-11-27 12:12:13.899149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:24.082 [2024-11-27 12:12:13.899166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:24.082 [2024-11-27 12:12:13.899176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:24.082 [2024-11-27 12:12:13.899189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:24.082 [2024-11-27 12:12:13.899200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:24.082 [2024-11-27 12:12:13.899214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:24.082 [2024-11-27 12:12:13.899224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:24.083 [2024-11-27 12:12:13.899237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:24.083 [2024-11-27 12:12:13.899247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:24.083 [2024-11-27 12:12:13.899260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:24.083 [2024-11-27 12:12:13.899271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:24.083 [2024-11-27 12:12:13.899291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:24.083 [2024-11-27 12:12:13.899301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:24.083 [2024-11-27 12:12:13.899315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:24.083 [2024-11-27 12:12:13.899325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:24.083 [2024-11-27 12:12:13.899340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:24.083 [2024-11-27 12:12:13.899350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:24.083 [2024-11-27 12:12:13.899392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:24.083 [2024-11-27 12:12:13.899402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:24.083 [2024-11-27 12:12:13.899415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:24.083 [2024-11-27 12:12:13.899427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:24.083 [2024-11-27 12:12:13.899440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
28: 0 / 261120 wr_cnt: 0 state: free
00:28:24.083 [2024-11-27 12:12:13.899451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:28:24.083 [2024-11-27 12:12:13.899464 .. 12:12:13.900324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (71 identical free-band records collapsed)
00:28:24.083 [2024-11-27 12:12:13.900342] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:28:24.083 [2024-11-27 12:12:13.900354] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36b3d101-e517-4a51-840e-d1a112a8f9ea
00:28:24.084 [2024-11-27 12:12:13.900374] ftl_debug.c:
213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:24.084 [2024-11-27 12:12:13.900392] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:24.084 [2024-11-27 12:12:13.900401] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:24.084 [2024-11-27 12:12:13.900414] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:24.084 [2024-11-27 12:12:13.900424] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:24.084 [2024-11-27 12:12:13.900436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:24.084 [2024-11-27 12:12:13.900446] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:24.084 [2024-11-27 12:12:13.900458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:24.084 [2024-11-27 12:12:13.900467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:24.084 [2024-11-27 12:12:13.900479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.084 [2024-11-27 12:12:13.900489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:24.084 [2024-11-27 12:12:13.900502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.427 ms 00:28:24.084 [2024-11-27 12:12:13.900512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.084 [2024-11-27 12:12:13.920301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.084 [2024-11-27 12:12:13.920336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:24.084 [2024-11-27 12:12:13.920351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.765 ms 00:28:24.084 [2024-11-27 12:12:13.920377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.084 [2024-11-27 12:12:13.920978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.084 [2024-11-27 12:12:13.920995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:24.084 [2024-11-27 12:12:13.921009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:28:24.084 [2024-11-27 12:12:13.921019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.084 [2024-11-27 12:12:13.984985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.084 [2024-11-27 12:12:13.985020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:24.084 [2024-11-27 12:12:13.985053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.084 [2024-11-27 12:12:13.985063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.084 [2024-11-27 12:12:13.985122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.084 [2024-11-27 12:12:13.985133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:24.084 [2024-11-27 12:12:13.985146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.084 [2024-11-27 12:12:13.985158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.084 [2024-11-27 12:12:13.985260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.084 [2024-11-27 12:12:13.985273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:24.084 [2024-11-27 12:12:13.985286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:28:24.084 [2024-11-27 12:12:13.985295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.084 [2024-11-27 12:12:13.985320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.084 [2024-11-27 12:12:13.985330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:24.084 [2024-11-27 12:12:13.985342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.084 [2024-11-27 12:12:13.985352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.084 [2024-11-27 12:12:14.103018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.084 [2024-11-27 12:12:14.103067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:24.084 [2024-11-27 12:12:14.103085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.084 [2024-11-27 12:12:14.103094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.344 [2024-11-27 12:12:14.199786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.344 [2024-11-27 12:12:14.199835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:24.344 [2024-11-27 12:12:14.199853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.344 [2024-11-27 12:12:14.199863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.344 [2024-11-27 12:12:14.199990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.344 [2024-11-27 12:12:14.200007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:24.344 [2024-11-27 12:12:14.200020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.344 [2024-11-27 12:12:14.200030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.344 [2024-11-27 12:12:14.200084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.344 [2024-11-27 12:12:14.200096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:24.344 [2024-11-27 12:12:14.200109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.344 [2024-11-27 12:12:14.200119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.344 [2024-11-27 12:12:14.200257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.344 [2024-11-27 12:12:14.200271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:24.344 [2024-11-27 12:12:14.200288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.344 [2024-11-27 12:12:14.200297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.344 [2024-11-27 12:12:14.200344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.344 [2024-11-27 12:12:14.200357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:24.344 [2024-11-27 12:12:14.200369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.344 [2024-11-27 12:12:14.200379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.344 [2024-11-27 12:12:14.200438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.344 [2024-11-27 12:12:14.200450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:24.344 [2024-11-27 
12:12:14.200467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.344 [2024-11-27 12:12:14.200477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.344 [2024-11-27 12:12:14.200527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.344 [2024-11-27 12:12:14.200539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:24.344 [2024-11-27 12:12:14.200552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.344 [2024-11-27 12:12:14.200561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.344 [2024-11-27 12:12:14.200699] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 532.872 ms, result 0 00:28:24.344 true 00:28:24.344 12:12:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81119 00:28:24.344 12:12:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81119 00:28:24.344 12:12:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:28:24.344 [2024-11-27 12:12:14.332113] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:28:24.344 [2024-11-27 12:12:14.332239] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81990 ] 00:28:24.603 [2024-11-27 12:12:14.512020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.603 [2024-11-27 12:12:14.624059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.982  [2024-11-27T12:12:16.973Z] Copying: 209/1024 [MB] (209 MBps) [2024-11-27T12:12:18.352Z] Copying: 422/1024 [MB] (212 MBps) [2024-11-27T12:12:19.290Z] Copying: 635/1024 [MB] (213 MBps) [2024-11-27T12:12:19.859Z] Copying: 844/1024 [MB] (208 MBps) [2024-11-27T12:12:21.238Z] Copying: 1024/1024 [MB] (average 210 MBps) 00:28:31.186 00:28:31.186 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81119 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:28:31.186 12:12:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:31.186 [2024-11-27 12:12:21.003273] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
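
The ftl_dev_dump_stats block above reports total writes: 960 against user writes: 0, which is why WAF prints as inf: at that point every band write was FTL metadata and housekeeping, and write amplification is the ratio of total media writes to user writes. A minimal sketch of the same computation, assuming the two counters are scraped from records like the ones above (the helper name and the inline sample string are illustrative, only the counters come from the log):

import re

def waf_from_dump(text):
    # ftl_dev_dump_stats prints "total writes: N" and "user writes: M";
    # WAF = total media writes / user writes, infinite while M == 0.
    total = int(re.search(r"total writes:\s*(\d+)", text).group(1))
    user = int(re.search(r"user writes:\s*(\d+)", text).group(1))
    return float("inf") if user == 0 else total / user

print(waf_from_dump("total writes: 960 ... user writes: 0"))  # inf, matching "WAF: inf" above
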
00:28:31.186 [2024-11-27 12:12:21.003413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82060 ] 00:28:31.186 [2024-11-27 12:12:21.180527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.445 [2024-11-27 12:12:21.283435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.704 [2024-11-27 12:12:21.637671] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:31.704 [2024-11-27 12:12:21.637758] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:31.704 [2024-11-27 12:12:21.703771] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:31.704 [2024-11-27 12:12:21.704073] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:31.704 [2024-11-27 12:12:21.704443] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:31.963 [2024-11-27 12:12:22.011854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.963 [2024-11-27 12:12:22.011902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:31.963 [2024-11-27 12:12:22.011917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:31.963 [2024-11-27 12:12:22.011932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.963 [2024-11-27 12:12:22.011978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.963 [2024-11-27 12:12:22.011991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:31.963 [2024-11-27 12:12:22.012001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:31.963 [2024-11-27 12:12:22.012011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.963 [2024-11-27 12:12:22.012032] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:31.963 [2024-11-27 12:12:22.013009] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:31.963 [2024-11-27 12:12:22.013039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.963 [2024-11-27 12:12:22.013050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:31.963 [2024-11-27 12:12:22.013061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.012 ms 00:28:31.963 [2024-11-27 12:12:22.013071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.963 [2024-11-27 12:12:22.014534] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:32.224 [2024-11-27 12:12:22.032780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.224 [2024-11-27 12:12:22.032818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:32.224 [2024-11-27 12:12:22.032848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.276 ms 00:28:32.224 [2024-11-27 12:12:22.032858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.224 [2024-11-27 12:12:22.032925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.224 [2024-11-27 12:12:22.032939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:28:32.224 [2024-11-27 12:12:22.032950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:32.224 [2024-11-27 12:12:22.032959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.224 [2024-11-27 12:12:22.039855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.224 [2024-11-27 12:12:22.039883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:32.224 [2024-11-27 12:12:22.039894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.835 ms 00:28:32.224 [2024-11-27 12:12:22.039919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.224 [2024-11-27 12:12:22.039999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.224 [2024-11-27 12:12:22.040012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:32.224 [2024-11-27 12:12:22.040023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:28:32.224 [2024-11-27 12:12:22.040033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.224 [2024-11-27 12:12:22.040075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.224 [2024-11-27 12:12:22.040087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:32.224 [2024-11-27 12:12:22.040098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:32.224 [2024-11-27 12:12:22.040108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.224 [2024-11-27 12:12:22.040131] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:32.224 [2024-11-27 12:12:22.044887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.224 [2024-11-27 12:12:22.044919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:32.224 [2024-11-27 12:12:22.044931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.769 ms 00:28:32.224 [2024-11-27 12:12:22.044941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.224 [2024-11-27 12:12:22.044988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.224 [2024-11-27 12:12:22.044999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:32.224 [2024-11-27 12:12:22.045009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:32.224 [2024-11-27 12:12:22.045018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.224 [2024-11-27 12:12:22.045074] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:32.224 [2024-11-27 12:12:22.045097] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:32.224 [2024-11-27 12:12:22.045131] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:32.224 [2024-11-27 12:12:22.045149] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:32.224 [2024-11-27 12:12:22.045253] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:32.224 [2024-11-27 12:12:22.045266] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:32.224 
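
Because spdk_tgt was killed with SIGKILL above, this second startup takes the dirty-shutdown path, and the records just dumped show it: bs_recover performing blobstore recovery, and ftl_mngt_load_sb reporting SHM: clean 0, shm_clean 0. A test wrapper could assert the path was really exercised by grepping for those markers; a sketch (the function name and log path are hypothetical, the marker strings are copied from the records above):

import re

# Records emitted on this run's non-clean startup (see the log above)
DIRTY_RESTART_MARKERS = [
    r"bs_recover: \*NOTICE\*: Performing recovery on blobstore",
    r"ftl_mngt_load_sb: \*NOTICE\*: \[FTL\]\[ftl0\] SHM: clean 0, shm_clean 0",
]

def dirty_restart_seen(log_text):
    # True only if every dirty-restart marker appears in the captured console log.
    return all(re.search(m, log_text) for m in DIRTY_RESTART_MARKERS)

# e.g.: assert dirty_restart_seen(open("console.log").read())
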
[2024-11-27 12:12:22.045279] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:32.224 [2024-11-27 12:12:22.045296] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:32.224 [2024-11-27 12:12:22.045307] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:32.224 [2024-11-27 12:12:22.045318] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:32.224 [2024-11-27 12:12:22.045328] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:32.224 [2024-11-27 12:12:22.045338] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:32.224 [2024-11-27 12:12:22.045348] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:32.224 [2024-11-27 12:12:22.045358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.224 [2024-11-27 12:12:22.045367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:32.224 [2024-11-27 12:12:22.045378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:28:32.224 [2024-11-27 12:12:22.045398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.224 [2024-11-27 12:12:22.045468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.224 [2024-11-27 12:12:22.045483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:32.224 [2024-11-27 12:12:22.045493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:32.224 [2024-11-27 12:12:22.045503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.224 [2024-11-27 12:12:22.045598] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:32.224 [2024-11-27 12:12:22.045613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:32.224 [2024-11-27 12:12:22.045623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:32.224 [2024-11-27 12:12:22.045633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.224 [2024-11-27 12:12:22.045644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:32.224 [2024-11-27 12:12:22.045654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:32.224 [2024-11-27 12:12:22.045664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:32.224 [2024-11-27 12:12:22.045673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:32.224 [2024-11-27 12:12:22.045682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:32.224 [2024-11-27 12:12:22.045710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:32.224 [2024-11-27 12:12:22.045720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:32.225 [2024-11-27 12:12:22.045729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:32.225 [2024-11-27 12:12:22.045738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:32.225 [2024-11-27 12:12:22.045747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:32.225 [2024-11-27 12:12:22.045757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:32.225 [2024-11-27 12:12:22.045766] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.225 [2024-11-27 12:12:22.045775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:32.225 [2024-11-27 12:12:22.045784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:32.225 [2024-11-27 12:12:22.045792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.225 [2024-11-27 12:12:22.045802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:32.225 [2024-11-27 12:12:22.045812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:32.225 [2024-11-27 12:12:22.045821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:32.225 [2024-11-27 12:12:22.045829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:32.225 [2024-11-27 12:12:22.045838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:32.225 [2024-11-27 12:12:22.045847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:32.225 [2024-11-27 12:12:22.045857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:32.225 [2024-11-27 12:12:22.045866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:32.225 [2024-11-27 12:12:22.045875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:32.225 [2024-11-27 12:12:22.045884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:32.225 [2024-11-27 12:12:22.045893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:32.225 [2024-11-27 12:12:22.045901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:32.225 [2024-11-27 12:12:22.045910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:32.225 [2024-11-27 12:12:22.045919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:32.225 [2024-11-27 12:12:22.045928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:32.225 [2024-11-27 12:12:22.045937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:32.225 [2024-11-27 12:12:22.045946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:32.225 [2024-11-27 12:12:22.045955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:32.225 [2024-11-27 12:12:22.045965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:32.225 [2024-11-27 12:12:22.045974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:32.225 [2024-11-27 12:12:22.045982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.225 [2024-11-27 12:12:22.045991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:32.225 [2024-11-27 12:12:22.046000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:32.225 [2024-11-27 12:12:22.046009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.225 [2024-11-27 12:12:22.046018] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:32.225 [2024-11-27 12:12:22.046027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:32.225 [2024-11-27 12:12:22.046041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:32.225 [2024-11-27 12:12:22.046050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.225 [2024-11-27 
12:12:22.046060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:32.225 [2024-11-27 12:12:22.046069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:32.225 [2024-11-27 12:12:22.046078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:32.225 [2024-11-27 12:12:22.046088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:32.225 [2024-11-27 12:12:22.046096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:32.225 [2024-11-27 12:12:22.046105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:32.225 [2024-11-27 12:12:22.046116] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:32.225 [2024-11-27 12:12:22.046128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:32.225 [2024-11-27 12:12:22.046140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:32.225 [2024-11-27 12:12:22.046150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:32.225 [2024-11-27 12:12:22.046162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:32.225 [2024-11-27 12:12:22.046172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:32.225 [2024-11-27 12:12:22.046183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:32.225 [2024-11-27 12:12:22.046192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:32.225 [2024-11-27 12:12:22.046202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:32.225 [2024-11-27 12:12:22.046212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:32.225 [2024-11-27 12:12:22.046223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:32.225 [2024-11-27 12:12:22.046233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:32.225 [2024-11-27 12:12:22.046243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:32.225 [2024-11-27 12:12:22.046252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:32.225 [2024-11-27 12:12:22.046262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:32.225 [2024-11-27 12:12:22.046273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:32.225 [2024-11-27 12:12:22.046284] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:28:32.225 [2024-11-27 12:12:22.046295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:32.225 [2024-11-27 12:12:22.046306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:32.225 [2024-11-27 12:12:22.046316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:32.225 [2024-11-27 12:12:22.046327] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:32.225 [2024-11-27 12:12:22.046336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:32.225 [2024-11-27 12:12:22.046347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.225 [2024-11-27 12:12:22.046366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:32.225 [2024-11-27 12:12:22.046377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:28:32.225 [2024-11-27 12:12:22.046387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.225 [2024-11-27 12:12:22.083690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.225 [2024-11-27 12:12:22.083725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:32.225 [2024-11-27 12:12:22.083739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.314 ms 00:28:32.225 [2024-11-27 12:12:22.083750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.225 [2024-11-27 12:12:22.083832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.225 [2024-11-27 12:12:22.083844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:32.225 [2024-11-27 12:12:22.083855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:32.225 [2024-11-27 12:12:22.083865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.225 [2024-11-27 12:12:22.139407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.225 [2024-11-27 12:12:22.139454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:32.225 [2024-11-27 12:12:22.139489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.573 ms 00:28:32.225 [2024-11-27 12:12:22.139499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.225 [2024-11-27 12:12:22.139550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.225 [2024-11-27 12:12:22.139562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:32.225 [2024-11-27 12:12:22.139572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:32.225 [2024-11-27 12:12:22.139583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.225 [2024-11-27 12:12:22.140107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.225 [2024-11-27 12:12:22.140129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:32.225 [2024-11-27 12:12:22.140141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:28:32.225 [2024-11-27 12:12:22.140158] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.225 [2024-11-27 12:12:22.140280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.225 [2024-11-27 12:12:22.140299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:32.225 [2024-11-27 12:12:22.140310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:28:32.226 [2024-11-27 12:12:22.140320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.226 [2024-11-27 12:12:22.159115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.226 [2024-11-27 12:12:22.159150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:32.226 [2024-11-27 12:12:22.159179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.804 ms 00:28:32.226 [2024-11-27 12:12:22.159190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.226 [2024-11-27 12:12:22.178487] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:32.226 [2024-11-27 12:12:22.178528] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:32.226 [2024-11-27 12:12:22.178544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.226 [2024-11-27 12:12:22.178556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:32.226 [2024-11-27 12:12:22.178569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.270 ms 00:28:32.226 [2024-11-27 12:12:22.178579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.226 [2024-11-27 12:12:22.208662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.226 [2024-11-27 12:12:22.208702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:32.226 [2024-11-27 12:12:22.208733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.081 ms 00:28:32.226 [2024-11-27 12:12:22.208745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.226 [2024-11-27 12:12:22.227484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.226 [2024-11-27 12:12:22.227527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:32.226 [2024-11-27 12:12:22.227542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.717 ms 00:28:32.226 [2024-11-27 12:12:22.227552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.226 [2024-11-27 12:12:22.245820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.226 [2024-11-27 12:12:22.245859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:32.226 [2024-11-27 12:12:22.245873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.252 ms 00:28:32.226 [2024-11-27 12:12:22.245884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.226 [2024-11-27 12:12:22.246670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.226 [2024-11-27 12:12:22.246702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:32.226 [2024-11-27 12:12:22.246715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:28:32.226 [2024-11-27 12:12:22.246725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
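
The SB metadata layout dumped above prints each region as hex block offsets and sizes, while the layout dump before it prints MiB; the two agree if one block is 4096 bytes (inferred from the dump itself, not stated in it). For example the l2p region, type 0x2 with blk_offs:0x20 blk_sz:0x5000: 0x20 blocks is 0.12 MiB of offset and 0x5000 blocks is 80.00 MiB, exactly the "Region l2p" figures above. A sketch of the conversion:

FTL_BLOCK_SIZE = 4096  # bytes; assumed, since 0x5000 blocks must equal the 80.00 MiB l2p region

def region_mib(blk_offs, blk_sz):
    # Convert the hex block fields from ftl_superblock_v5_md_layout_dump to MiB.
    mib = lambda h: int(h, 16) * FTL_BLOCK_SIZE / (1024 * 1024)
    return mib(blk_offs), mib(blk_sz)

print(region_mib("0x20", "0x5000"))  # (0.125, 80.0) -> "offset: 0.12 MiB", "blocks: 80.00 MiB"
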
00:28:32.485 [2024-11-27 12:12:22.333135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.485 [2024-11-27 12:12:22.333194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:32.485 [2024-11-27 12:12:22.333227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.527 ms 00:28:32.485 [2024-11-27 12:12:22.333238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.485 [2024-11-27 12:12:22.344518] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:32.485 [2024-11-27 12:12:22.347765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.485 [2024-11-27 12:12:22.347798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:32.485 [2024-11-27 12:12:22.347812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.474 ms 00:28:32.485 [2024-11-27 12:12:22.347828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.485 [2024-11-27 12:12:22.347935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.485 [2024-11-27 12:12:22.347948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:32.485 [2024-11-27 12:12:22.347960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:32.485 [2024-11-27 12:12:22.347970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.485 [2024-11-27 12:12:22.348066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.485 [2024-11-27 12:12:22.348079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:32.485 [2024-11-27 12:12:22.348090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:28:32.485 [2024-11-27 12:12:22.348100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.485 [2024-11-27 12:12:22.348128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.485 [2024-11-27 12:12:22.348139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:32.485 [2024-11-27 12:12:22.348150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:32.485 [2024-11-27 12:12:22.348160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.485 [2024-11-27 12:12:22.348192] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:32.485 [2024-11-27 12:12:22.348204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.485 [2024-11-27 12:12:22.348214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:32.485 [2024-11-27 12:12:22.348224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:32.485 [2024-11-27 12:12:22.348238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.485 [2024-11-27 12:12:22.385865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.485 [2024-11-27 12:12:22.385915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:32.485 [2024-11-27 12:12:22.385930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.663 ms 00:28:32.485 [2024-11-27 12:12:22.385941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.485 [2024-11-27 12:12:22.386032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.485 [2024-11-27 
12:12:22.386046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:28:32.485 [2024-11-27 12:12:22.386057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms
00:28:32.485 [2024-11-27 12:12:22.386067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:32.485 [2024-11-27 12:12:22.387266] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 375.565 ms, result 0
00:28:33.422 [2024-11-27T12:12:24.412Z] Copying: 24/1024 [MB] (24 MBps) .. [2024-11-27T12:13:05.277Z] Copying: 1019/1024 [MB] (24 MBps) (intermediate per-second progress records collapsed; rate stayed at 22-24 MBps throughout)
[2024-11-27T12:13:05.278Z] Copying: 1024/1024 [MB] (average 23 MBps)
[2024-11-27 12:13:05.251223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:15.225 [2024-11-27 12:13:05.251282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:29:15.225 [2024-11-27 12:13:05.251298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:29:15.225 [2024-11-27 12:13:05.251325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:15.225
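
'FTL startup' just finished in 375.565 ms, and the trace_step records above break that total down per step: Restore P2L checkpoints 86.527 ms, Initialize NV cache 55.573 ms, Set FTL dirty state 37.663 ms, Initialize metadata 37.314 ms, and so on. A sketch that rebuilds such a breakdown from the console log, assuming one record per line as the console originally emitted them (the log file name is hypothetical):

import re

NAME = re.compile(r"428:trace_step: .*name: (.+?)\s*$")
DURATION = re.compile(r"430:trace_step: .*duration: ([\d.]+) ms")

def step_durations(lines):
    # Pair each "name:" record with the "duration:" record that follows it.
    out, pending = {}, None
    for line in lines:
        if (m := NAME.search(line)):
            pending = m.group(1)
        elif (m := DURATION.search(line)) and pending:
            out[pending] = out.get(pending, 0.0) + float(m.group(1))
            pending = None
    return out

# e.g.: sorted(step_durations(open("console.log")).items(), key=lambda kv: -kv[1])[:5]
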
[2024-11-27 12:13:05.252478] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:15.225 [2024-11-27 12:13:05.257927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.225 [2024-11-27 12:13:05.257965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:15.225 [2024-11-27 12:13:05.257995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.415 ms 00:29:15.225 [2024-11-27 12:13:05.258012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.225 [2024-11-27 12:13:05.267622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.225 [2024-11-27 12:13:05.267661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:15.225 [2024-11-27 12:13:05.267674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.963 ms 00:29:15.225 [2024-11-27 12:13:05.267684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.484 [2024-11-27 12:13:05.290795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.484 [2024-11-27 12:13:05.290836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:15.484 [2024-11-27 12:13:05.290851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.114 ms 00:29:15.484 [2024-11-27 12:13:05.290863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.484 [2024-11-27 12:13:05.296067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.484 [2024-11-27 12:13:05.296100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:15.484 [2024-11-27 12:13:05.296111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.170 ms 00:29:15.484 [2024-11-27 12:13:05.296138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.484 [2024-11-27 12:13:05.331095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.484 [2024-11-27 12:13:05.331135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:15.484 [2024-11-27 12:13:05.331147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.955 ms 00:29:15.484 [2024-11-27 12:13:05.331157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.484 [2024-11-27 12:13:05.351602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.484 [2024-11-27 12:13:05.351649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:15.484 [2024-11-27 12:13:05.351679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.424 ms 00:29:15.484 [2024-11-27 12:13:05.351690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.484 [2024-11-27 12:13:05.471322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.484 [2024-11-27 12:13:05.471383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:15.484 [2024-11-27 12:13:05.471405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 119.786 ms 00:29:15.484 [2024-11-27 12:13:05.471415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.484 [2024-11-27 12:13:05.505808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.484 [2024-11-27 12:13:05.505842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:15.484 [2024-11-27 
12:13:05.505870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.431 ms
00:29:15.484 [2024-11-27 12:13:05.505892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:15.745 [2024-11-27 12:13:05.540942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:15.745 [2024-11-27 12:13:05.540975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:29:15.745 [2024-11-27 12:13:05.540987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.070 ms
00:29:15.745 [2024-11-27 12:13:05.540996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:15.745 [2024-11-27 12:13:05.575262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:15.745 [2024-11-27 12:13:05.575294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:29:15.745 [2024-11-27 12:13:05.575306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.269 ms
00:29:15.745 [2024-11-27 12:13:05.575315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:15.745 [2024-11-27 12:13:05.610209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:15.745 [2024-11-27 12:13:05.610244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:29:15.745 [2024-11-27 12:13:05.610257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.855 ms
00:29:15.745 [2024-11-27 12:13:05.610266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:15.745 [2024-11-27 12:13:05.610303] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:29:15.745 [2024-11-27 12:13:05.610318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 108288 / 261120 wr_cnt: 1 state: open
00:29:15.745 [2024-11-27 12:13:05.610331 .. 12:13:05.611225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 .. Band 87: 0 / 261120 wr_cnt: 0 state: free (86 identical free-band records collapsed)
00:29:15.746 [2024-11-27 12:13:05.611235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:15.746 [2024-11-27 12:13:05.611246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:15.746 [2024-11-27 12:13:05.611257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:15.746 [2024-11-27 12:13:05.611267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:15.746 [2024-11-27 12:13:05.611278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:15.746 [2024-11-27 12:13:05.611288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:15.746 [2024-11-27 12:13:05.611298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:15.746 [2024-11-27 12:13:05.611309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:15.746 [2024-11-27 12:13:05.611319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:15.746 [2024-11-27 12:13:05.611329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:15.746 [2024-11-27 12:13:05.611339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:15.746 [2024-11-27 12:13:05.611349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:15.746 [2024-11-27 12:13:05.611367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:15.746 [2024-11-27 12:13:05.611385] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:15.746 [2024-11-27 12:13:05.611395] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36b3d101-e517-4a51-840e-d1a112a8f9ea 00:29:15.746 [2024-11-27 12:13:05.611422] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 108288 00:29:15.746 [2024-11-27 12:13:05.611432] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 109248 00:29:15.746 [2024-11-27 12:13:05.611442] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 108288 00:29:15.746 [2024-11-27 12:13:05.611452] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:29:15.746 [2024-11-27 12:13:05.611462] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:15.746 [2024-11-27 12:13:05.611472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:15.746 [2024-11-27 12:13:05.611482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:15.746 [2024-11-27 12:13:05.611490] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:15.746 [2024-11-27 12:13:05.611499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:15.746 [2024-11-27 12:13:05.611509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.746 [2024-11-27 12:13:05.611520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:15.746 [2024-11-27 12:13:05.611530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.209 ms 00:29:15.746 [2024-11-27 12:13:05.611539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.746 [2024-11-27 12:13:05.631662] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.746 [2024-11-27 12:13:05.631697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:15.746 [2024-11-27 12:13:05.631710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.120 ms 00:29:15.746 [2024-11-27 12:13:05.631720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.746 [2024-11-27 12:13:05.632265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.746 [2024-11-27 12:13:05.632293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:15.746 [2024-11-27 12:13:05.632308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:29:15.746 [2024-11-27 12:13:05.632318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.746 [2024-11-27 12:13:05.681949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.746 [2024-11-27 12:13:05.681981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:15.746 [2024-11-27 12:13:05.682009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.746 [2024-11-27 12:13:05.682020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.746 [2024-11-27 12:13:05.682072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.746 [2024-11-27 12:13:05.682083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:15.746 [2024-11-27 12:13:05.682098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.746 [2024-11-27 12:13:05.682108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.746 [2024-11-27 12:13:05.682169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.746 [2024-11-27 12:13:05.682181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:15.746 [2024-11-27 12:13:05.682192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.746 [2024-11-27 12:13:05.682201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.746 [2024-11-27 12:13:05.682217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.746 [2024-11-27 12:13:05.682228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:15.746 [2024-11-27 12:13:05.682237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.746 [2024-11-27 12:13:05.682247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.006 [2024-11-27 12:13:05.799642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.006 [2024-11-27 12:13:05.799693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:16.006 [2024-11-27 12:13:05.799724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.006 [2024-11-27 12:13:05.799734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.006 [2024-11-27 12:13:05.893040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.006 [2024-11-27 12:13:05.893085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:16.006 [2024-11-27 12:13:05.893098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.006 [2024-11-27 12:13:05.893113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:16.006 [2024-11-27 12:13:05.893212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.006 [2024-11-27 12:13:05.893224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:16.006 [2024-11-27 12:13:05.893235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.006 [2024-11-27 12:13:05.893245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.006 [2024-11-27 12:13:05.893280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.006 [2024-11-27 12:13:05.893291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:16.006 [2024-11-27 12:13:05.893301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.006 [2024-11-27 12:13:05.893311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.006 [2024-11-27 12:13:05.893450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.006 [2024-11-27 12:13:05.893481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:16.006 [2024-11-27 12:13:05.893492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.006 [2024-11-27 12:13:05.893501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.006 [2024-11-27 12:13:05.893538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.006 [2024-11-27 12:13:05.893550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:16.006 [2024-11-27 12:13:05.893560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.006 [2024-11-27 12:13:05.893570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.006 [2024-11-27 12:13:05.893610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.006 [2024-11-27 12:13:05.893622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:16.006 [2024-11-27 12:13:05.893632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.006 [2024-11-27 12:13:05.893642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.006 [2024-11-27 12:13:05.893690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.006 [2024-11-27 12:13:05.893702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:16.006 [2024-11-27 12:13:05.893712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.006 [2024-11-27 12:13:05.893722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.006 [2024-11-27 12:13:05.893843] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 645.857 ms, result 0 00:29:17.385 00:29:17.385 00:29:17.386 12:13:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:19.290 12:13:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:19.291 [2024-11-27 12:13:08.976882] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:29:19.291 [2024-11-27 12:13:08.977170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82540 ] 00:29:19.291 [2024-11-27 12:13:09.156848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.291 [2024-11-27 12:13:09.265740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.867 [2024-11-27 12:13:09.617699] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:19.867 [2024-11-27 12:13:09.617778] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:19.867 [2024-11-27 12:13:09.779628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.867 [2024-11-27 12:13:09.779675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:19.867 [2024-11-27 12:13:09.779691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:19.867 [2024-11-27 12:13:09.779700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.867 [2024-11-27 12:13:09.779744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.867 [2024-11-27 12:13:09.779758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:19.867 [2024-11-27 12:13:09.779769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:29:19.867 [2024-11-27 12:13:09.779778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.867 [2024-11-27 12:13:09.779799] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:19.867 [2024-11-27 12:13:09.780792] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:19.867 [2024-11-27 12:13:09.780813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.867 [2024-11-27 12:13:09.780824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:19.867 [2024-11-27 12:13:09.780835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.021 ms 00:29:19.867 [2024-11-27 12:13:09.780845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.868 [2024-11-27 12:13:09.782338] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:19.868 [2024-11-27 12:13:09.801069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.868 [2024-11-27 12:13:09.801115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:19.868 [2024-11-27 12:13:09.801131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.761 ms 00:29:19.868 [2024-11-27 12:13:09.801141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.868 [2024-11-27 12:13:09.801206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.868 [2024-11-27 12:13:09.801218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:19.868 [2024-11-27 12:13:09.801228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:29:19.868 [2024-11-27 12:13:09.801238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.868 [2024-11-27 12:13:09.808130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:19.868 [2024-11-27 12:13:09.808152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:19.868 [2024-11-27 12:13:09.808164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.834 ms 00:29:19.868 [2024-11-27 12:13:09.808178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.868 [2024-11-27 12:13:09.808253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.868 [2024-11-27 12:13:09.808265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:19.868 [2024-11-27 12:13:09.808275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:19.868 [2024-11-27 12:13:09.808285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.868 [2024-11-27 12:13:09.808323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.868 [2024-11-27 12:13:09.808334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:19.868 [2024-11-27 12:13:09.808343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:19.868 [2024-11-27 12:13:09.808353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.868 [2024-11-27 12:13:09.808393] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:19.868 [2024-11-27 12:13:09.813282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.868 [2024-11-27 12:13:09.813310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:19.868 [2024-11-27 12:13:09.813325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.902 ms 00:29:19.868 [2024-11-27 12:13:09.813335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.868 [2024-11-27 12:13:09.813372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.868 [2024-11-27 12:13:09.813384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:19.868 [2024-11-27 12:13:09.813395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:19.868 [2024-11-27 12:13:09.813405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.868 [2024-11-27 12:13:09.813471] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:19.868 [2024-11-27 12:13:09.813494] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:19.868 [2024-11-27 12:13:09.813531] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:19.868 [2024-11-27 12:13:09.813553] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:19.868 [2024-11-27 12:13:09.813641] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:19.868 [2024-11-27 12:13:09.813655] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:19.868 [2024-11-27 12:13:09.813668] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:19.868 [2024-11-27 12:13:09.813690] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:19.868 [2024-11-27 12:13:09.813703] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:19.868 [2024-11-27 12:13:09.813715] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:19.868 [2024-11-27 12:13:09.813725] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:19.868 [2024-11-27 12:13:09.813739] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:19.868 [2024-11-27 12:13:09.813749] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:19.868 [2024-11-27 12:13:09.813759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.868 [2024-11-27 12:13:09.813769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:19.868 [2024-11-27 12:13:09.813781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:29:19.868 [2024-11-27 12:13:09.813791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.868 [2024-11-27 12:13:09.813865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.868 [2024-11-27 12:13:09.813876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:19.868 [2024-11-27 12:13:09.813887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:19.868 [2024-11-27 12:13:09.813896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.868 [2024-11-27 12:13:09.813993] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:19.868 [2024-11-27 12:13:09.814007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:19.868 [2024-11-27 12:13:09.814018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:19.868 [2024-11-27 12:13:09.814029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.868 [2024-11-27 12:13:09.814039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:19.868 [2024-11-27 12:13:09.814049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:19.868 [2024-11-27 12:13:09.814060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:19.868 [2024-11-27 12:13:09.814071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:19.868 [2024-11-27 12:13:09.814081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:19.868 [2024-11-27 12:13:09.814090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:19.868 [2024-11-27 12:13:09.814102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:19.868 [2024-11-27 12:13:09.814112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:19.868 [2024-11-27 12:13:09.814121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:19.868 [2024-11-27 12:13:09.814141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:19.868 [2024-11-27 12:13:09.814151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:19.868 [2024-11-27 12:13:09.814161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.868 [2024-11-27 12:13:09.814170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:19.868 [2024-11-27 12:13:09.814180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:19.868 [2024-11-27 12:13:09.814189] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.868 [2024-11-27 12:13:09.814199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:19.868 [2024-11-27 12:13:09.814209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:19.868 [2024-11-27 12:13:09.814218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:19.868 [2024-11-27 12:13:09.814227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:19.868 [2024-11-27 12:13:09.814236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:19.868 [2024-11-27 12:13:09.814246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:19.868 [2024-11-27 12:13:09.814254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:19.868 [2024-11-27 12:13:09.814264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:19.868 [2024-11-27 12:13:09.814272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:19.868 [2024-11-27 12:13:09.814282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:19.868 [2024-11-27 12:13:09.814291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:19.868 [2024-11-27 12:13:09.814300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:19.868 [2024-11-27 12:13:09.814309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:19.868 [2024-11-27 12:13:09.814319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:19.868 [2024-11-27 12:13:09.814328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:19.868 [2024-11-27 12:13:09.814337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:19.868 [2024-11-27 12:13:09.814346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:19.868 [2024-11-27 12:13:09.814354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:19.868 [2024-11-27 12:13:09.814378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:19.868 [2024-11-27 12:13:09.814389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:19.868 [2024-11-27 12:13:09.814398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.868 [2024-11-27 12:13:09.814408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:19.868 [2024-11-27 12:13:09.814417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:19.868 [2024-11-27 12:13:09.814427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.868 [2024-11-27 12:13:09.814437] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:19.868 [2024-11-27 12:13:09.814448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:19.868 [2024-11-27 12:13:09.814458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:19.868 [2024-11-27 12:13:09.814468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.868 [2024-11-27 12:13:09.814478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:19.868 [2024-11-27 12:13:09.814488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:19.868 [2024-11-27 12:13:09.814497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:19.868 
[2024-11-27 12:13:09.814506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:19.868 [2024-11-27 12:13:09.814516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:19.868 [2024-11-27 12:13:09.814525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:19.868 [2024-11-27 12:13:09.814535] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:19.868 [2024-11-27 12:13:09.814548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:19.869 [2024-11-27 12:13:09.814564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:19.869 [2024-11-27 12:13:09.814574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:19.869 [2024-11-27 12:13:09.814596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:19.869 [2024-11-27 12:13:09.814606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:19.869 [2024-11-27 12:13:09.814617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:19.869 [2024-11-27 12:13:09.814627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:19.869 [2024-11-27 12:13:09.814637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:19.869 [2024-11-27 12:13:09.814648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:19.869 [2024-11-27 12:13:09.814659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:19.869 [2024-11-27 12:13:09.814669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:19.869 [2024-11-27 12:13:09.814680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:19.869 [2024-11-27 12:13:09.814690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:19.869 [2024-11-27 12:13:09.814700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:19.869 [2024-11-27 12:13:09.814710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:19.869 [2024-11-27 12:13:09.814720] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:19.869 [2024-11-27 12:13:09.814732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:19.869 [2024-11-27 12:13:09.814743] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:19.869 [2024-11-27 12:13:09.814753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:19.869 [2024-11-27 12:13:09.814764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:19.869 [2024-11-27 12:13:09.814774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:19.869 [2024-11-27 12:13:09.814784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.869 [2024-11-27 12:13:09.814795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:19.869 [2024-11-27 12:13:09.814805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.847 ms 00:29:19.869 [2024-11-27 12:13:09.814814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.869 [2024-11-27 12:13:09.853898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.869 [2024-11-27 12:13:09.853928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:19.869 [2024-11-27 12:13:09.853941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.101 ms 00:29:19.869 [2024-11-27 12:13:09.853956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.869 [2024-11-27 12:13:09.854029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.869 [2024-11-27 12:13:09.854039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:19.869 [2024-11-27 12:13:09.854049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:29:19.869 [2024-11-27 12:13:09.854058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.126 [2024-11-27 12:13:09.928685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.126 [2024-11-27 12:13:09.928719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:20.126 [2024-11-27 12:13:09.928732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.691 ms 00:29:20.126 [2024-11-27 12:13:09.928742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.126 [2024-11-27 12:13:09.928782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.126 [2024-11-27 12:13:09.928794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:20.127 [2024-11-27 12:13:09.928809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:20.127 [2024-11-27 12:13:09.928818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-27 12:13:09.929301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-27 12:13:09.929315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:20.127 [2024-11-27 12:13:09.929326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:29:20.127 [2024-11-27 12:13:09.929335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-27 12:13:09.929480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-27 12:13:09.929494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:20.127 [2024-11-27 12:13:09.929511] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:29:20.127 [2024-11-27 12:13:09.929521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-27 12:13:09.947696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-27 12:13:09.947727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:20.127 [2024-11-27 12:13:09.947740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.183 ms 00:29:20.127 [2024-11-27 12:13:09.947750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-27 12:13:09.966058] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:20.127 [2024-11-27 12:13:09.966091] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:20.127 [2024-11-27 12:13:09.966121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-27 12:13:09.966132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:20.127 [2024-11-27 12:13:09.966144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.303 ms 00:29:20.127 [2024-11-27 12:13:09.966154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-27 12:13:09.994081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-27 12:13:09.994117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:20.127 [2024-11-27 12:13:09.994132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.929 ms 00:29:20.127 [2024-11-27 12:13:09.994142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-27 12:13:10.012396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-27 12:13:10.012428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:20.127 [2024-11-27 12:13:10.012441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.240 ms 00:29:20.127 [2024-11-27 12:13:10.012450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-27 12:13:10.029980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-27 12:13:10.030011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:20.127 [2024-11-27 12:13:10.030024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.520 ms 00:29:20.127 [2024-11-27 12:13:10.030034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-27 12:13:10.030804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-27 12:13:10.030823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:20.127 [2024-11-27 12:13:10.030839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.658 ms 00:29:20.127 [2024-11-27 12:13:10.030849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-27 12:13:10.113699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-27 12:13:10.113767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:20.127 [2024-11-27 12:13:10.113807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 82.963 ms 00:29:20.127 [2024-11-27 12:13:10.113818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-27 12:13:10.123911] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:20.127 [2024-11-27 12:13:10.126148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-27 12:13:10.126176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:20.127 [2024-11-27 12:13:10.126188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.303 ms 00:29:20.127 [2024-11-27 12:13:10.126199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-27 12:13:10.126274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-27 12:13:10.126287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:20.127 [2024-11-27 12:13:10.126302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:20.127 [2024-11-27 12:13:10.126311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-27 12:13:10.127833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-27 12:13:10.127863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:20.127 [2024-11-27 12:13:10.127875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.481 ms 00:29:20.127 [2024-11-27 12:13:10.127885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-27 12:13:10.127910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-27 12:13:10.127921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:20.127 [2024-11-27 12:13:10.127932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:20.127 [2024-11-27 12:13:10.127942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-27 12:13:10.127985] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:20.127 [2024-11-27 12:13:10.127998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-27 12:13:10.128008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:20.127 [2024-11-27 12:13:10.128018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:20.127 [2024-11-27 12:13:10.128027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-27 12:13:10.162789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-27 12:13:10.162821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:20.127 [2024-11-27 12:13:10.162841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.798 ms 00:29:20.127 [2024-11-27 12:13:10.162850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-27 12:13:10.162921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-27 12:13:10.162933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:20.127 [2024-11-27 12:13:10.162944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:29:20.127 [2024-11-27 12:13:10.162954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:20.127 [2024-11-27 12:13:10.164121] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 384.665 ms, result 0 00:29:21.503  [2024-11-27T12:13:12.493Z] Copying: 1252/1048576 [kB] (1252 kBps) [2024-11-27T12:13:13.431Z] Copying: 10328/1048576 [kB] (9076 kBps) [2024-11-27T12:13:14.367Z] Copying: 42/1024 [MB] (32 MBps) [2024-11-27T12:13:15.744Z] Copying: 75/1024 [MB] (32 MBps) [2024-11-27T12:13:16.681Z] Copying: 108/1024 [MB] (32 MBps) [2024-11-27T12:13:17.618Z] Copying: 140/1024 [MB] (31 MBps) [2024-11-27T12:13:18.556Z] Copying: 172/1024 [MB] (32 MBps) [2024-11-27T12:13:19.492Z] Copying: 205/1024 [MB] (33 MBps) [2024-11-27T12:13:20.431Z] Copying: 239/1024 [MB] (33 MBps) [2024-11-27T12:13:21.370Z] Copying: 272/1024 [MB] (33 MBps) [2024-11-27T12:13:22.750Z] Copying: 304/1024 [MB] (32 MBps) [2024-11-27T12:13:23.688Z] Copying: 336/1024 [MB] (32 MBps) [2024-11-27T12:13:24.625Z] Copying: 369/1024 [MB] (32 MBps) [2024-11-27T12:13:25.562Z] Copying: 401/1024 [MB] (32 MBps) [2024-11-27T12:13:26.499Z] Copying: 432/1024 [MB] (30 MBps) [2024-11-27T12:13:27.440Z] Copying: 462/1024 [MB] (29 MBps) [2024-11-27T12:13:28.391Z] Copying: 492/1024 [MB] (30 MBps) [2024-11-27T12:13:29.769Z] Copying: 522/1024 [MB] (29 MBps) [2024-11-27T12:13:30.708Z] Copying: 552/1024 [MB] (30 MBps) [2024-11-27T12:13:31.646Z] Copying: 584/1024 [MB] (32 MBps) [2024-11-27T12:13:32.585Z] Copying: 615/1024 [MB] (30 MBps) [2024-11-27T12:13:33.524Z] Copying: 645/1024 [MB] (30 MBps) [2024-11-27T12:13:34.463Z] Copying: 675/1024 [MB] (30 MBps) [2024-11-27T12:13:35.401Z] Copying: 706/1024 [MB] (30 MBps) [2024-11-27T12:13:36.339Z] Copying: 736/1024 [MB] (30 MBps) [2024-11-27T12:13:37.719Z] Copying: 766/1024 [MB] (30 MBps) [2024-11-27T12:13:38.659Z] Copying: 797/1024 [MB] (30 MBps) [2024-11-27T12:13:39.597Z] Copying: 827/1024 [MB] (30 MBps) [2024-11-27T12:13:40.535Z] Copying: 857/1024 [MB] (30 MBps) [2024-11-27T12:13:41.474Z] Copying: 886/1024 [MB] (29 MBps) [2024-11-27T12:13:42.413Z] Copying: 917/1024 [MB] (30 MBps) [2024-11-27T12:13:43.352Z] Copying: 947/1024 [MB] (30 MBps) [2024-11-27T12:13:44.731Z] Copying: 977/1024 [MB] (30 MBps) [2024-11-27T12:13:44.991Z] Copying: 1007/1024 [MB] (30 MBps) [2024-11-27T12:13:45.562Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-27 12:13:45.275267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.509 [2024-11-27 12:13:45.275351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:55.509 [2024-11-27 12:13:45.275386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:55.509 [2024-11-27 12:13:45.275398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.509 [2024-11-27 12:13:45.275427] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:55.509 [2024-11-27 12:13:45.280245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.509 [2024-11-27 12:13:45.280298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:55.509 [2024-11-27 12:13:45.280313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.803 ms 00:29:55.509 [2024-11-27 12:13:45.280325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.509 [2024-11-27 12:13:45.280665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.509 [2024-11-27 12:13:45.280702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Stop core poller 00:29:55.509 [2024-11-27 12:13:45.280716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:29:55.509 [2024-11-27 12:13:45.280728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.509 [2024-11-27 12:13:45.291607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.509 [2024-11-27 12:13:45.291659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:55.509 [2024-11-27 12:13:45.291676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.873 ms 00:29:55.509 [2024-11-27 12:13:45.291690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.509 [2024-11-27 12:13:45.297051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.509 [2024-11-27 12:13:45.297097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:55.509 [2024-11-27 12:13:45.297121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.328 ms 00:29:55.509 [2024-11-27 12:13:45.297134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.509 [2024-11-27 12:13:45.332728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.509 [2024-11-27 12:13:45.332775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:55.509 [2024-11-27 12:13:45.332803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.580 ms 00:29:55.509 [2024-11-27 12:13:45.332815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.509 [2024-11-27 12:13:45.353761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.509 [2024-11-27 12:13:45.353808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:55.509 [2024-11-27 12:13:45.353825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.936 ms 00:29:55.509 [2024-11-27 12:13:45.353837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.509 [2024-11-27 12:13:45.356239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.509 [2024-11-27 12:13:45.356281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:55.509 [2024-11-27 12:13:45.356296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.359 ms 00:29:55.509 [2024-11-27 12:13:45.356318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.509 [2024-11-27 12:13:45.391676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.509 [2024-11-27 12:13:45.391717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:55.509 [2024-11-27 12:13:45.391733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.394 ms 00:29:55.509 [2024-11-27 12:13:45.391745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.509 [2024-11-27 12:13:45.426255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.509 [2024-11-27 12:13:45.426297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:55.509 [2024-11-27 12:13:45.426313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.523 ms 00:29:55.509 [2024-11-27 12:13:45.426325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.509 [2024-11-27 12:13:45.460104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.509 [2024-11-27 
12:13:45.460146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:55.509 [2024-11-27 12:13:45.460161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.782 ms 00:29:55.509 [2024-11-27 12:13:45.460172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.509 [2024-11-27 12:13:45.493948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.509 [2024-11-27 12:13:45.493991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:55.509 [2024-11-27 12:13:45.494006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.745 ms 00:29:55.509 [2024-11-27 12:13:45.494017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.509 [2024-11-27 12:13:45.494059] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:55.509 [2024-11-27 12:13:45.494080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:55.509 [2024-11-27 12:13:45.494095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:55.509 [2024-11-27 12:13:45.494108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:55.509 [2024-11-27 12:13:45.494387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494629] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 
12:13:45.494928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.494989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 
00:29:55.510 [2024-11-27 12:13:45.495222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:55.510 [2024-11-27 12:13:45.495319] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:55.510 [2024-11-27 12:13:45.495330] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36b3d101-e517-4a51-840e-d1a112a8f9ea 00:29:55.510 [2024-11-27 12:13:45.495342] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:55.510 [2024-11-27 12:13:45.495353] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 156352 00:29:55.510 [2024-11-27 12:13:45.495380] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 154368 00:29:55.510 [2024-11-27 12:13:45.495392] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0129 00:29:55.510 [2024-11-27 12:13:45.495403] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:55.510 [2024-11-27 12:13:45.495429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:55.510 [2024-11-27 12:13:45.495441] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:55.510 [2024-11-27 12:13:45.495452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:55.510 [2024-11-27 12:13:45.495463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:55.510 [2024-11-27 12:13:45.495474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.510 [2024-11-27 12:13:45.495485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:55.510 [2024-11-27 12:13:45.495497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.419 ms 00:29:55.510 [2024-11-27 12:13:45.495508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.510 [2024-11-27 12:13:45.515397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.510 [2024-11-27 12:13:45.515436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:55.510 [2024-11-27 12:13:45.515450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.882 ms 00:29:55.511 [2024-11-27 12:13:45.515462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.511 [2024-11-27 12:13:45.516061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.511 [2024-11-27 12:13:45.516084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:55.511 [2024-11-27 12:13:45.516098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:29:55.511 [2024-11-27 
12:13:45.516110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.770 [2024-11-27 12:13:45.567702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.770 [2024-11-27 12:13:45.567742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:55.770 [2024-11-27 12:13:45.567757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.770 [2024-11-27 12:13:45.567771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.770 [2024-11-27 12:13:45.567834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.770 [2024-11-27 12:13:45.567847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:55.770 [2024-11-27 12:13:45.567859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.770 [2024-11-27 12:13:45.567871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.770 [2024-11-27 12:13:45.567961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.770 [2024-11-27 12:13:45.567976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:55.770 [2024-11-27 12:13:45.567989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.770 [2024-11-27 12:13:45.568001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.770 [2024-11-27 12:13:45.568021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.770 [2024-11-27 12:13:45.568034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:55.770 [2024-11-27 12:13:45.568046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.770 [2024-11-27 12:13:45.568058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.770 [2024-11-27 12:13:45.696368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.770 [2024-11-27 12:13:45.696425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:55.770 [2024-11-27 12:13:45.696444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.770 [2024-11-27 12:13:45.696456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.770 [2024-11-27 12:13:45.797725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.770 [2024-11-27 12:13:45.797780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:55.770 [2024-11-27 12:13:45.797798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.770 [2024-11-27 12:13:45.797812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.770 [2024-11-27 12:13:45.797927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.770 [2024-11-27 12:13:45.797951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:55.770 [2024-11-27 12:13:45.797964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.770 [2024-11-27 12:13:45.797975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.770 [2024-11-27 12:13:45.798028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.770 [2024-11-27 12:13:45.798041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:55.770 [2024-11-27 12:13:45.798053] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.770 [2024-11-27 12:13:45.798065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.770 [2024-11-27 12:13:45.798199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.770 [2024-11-27 12:13:45.798215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:55.770 [2024-11-27 12:13:45.798233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.770 [2024-11-27 12:13:45.798245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.770 [2024-11-27 12:13:45.798296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.770 [2024-11-27 12:13:45.798310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:55.770 [2024-11-27 12:13:45.798322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.770 [2024-11-27 12:13:45.798335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.770 [2024-11-27 12:13:45.798412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.771 [2024-11-27 12:13:45.798428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:55.771 [2024-11-27 12:13:45.798446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.771 [2024-11-27 12:13:45.798459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.771 [2024-11-27 12:13:45.798517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.771 [2024-11-27 12:13:45.798531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:55.771 [2024-11-27 12:13:45.798544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.771 [2024-11-27 12:13:45.798556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.771 [2024-11-27 12:13:45.798754] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 524.357 ms, result 0 00:29:57.151 00:29:57.151 00:29:57.151 12:13:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:58.528 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:58.528 12:13:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:58.528 [2024-11-27 12:13:48.535622] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
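The md5 check above is the heart of the dirty-shutdown test: data written before the simulated crash must read back bit-identical after recovery. `md5sum -c` confirms that for the chunk already read back, before `spdk_dd` pulls the second 262144 blocks (`--skip=262144 --count=262144`) out of ftl0 into testfile2 for the same comparison. A minimal sketch of what the `md5sum -c` step verifies, assuming md5sum's standard text format (`<digest>  <path>`, two spaces); this is illustrative, not the test's own tooling:

```python
# Recompute each file's MD5 and compare against the recorded digest,
# printing "OK"/"FAILED" per entry like md5sum -c does.
import hashlib

def md5sum_check(md5_file: str) -> bool:
    ok = True
    with open(md5_file) as fh:
        for line in fh:
            digest, _, path = line.strip().partition("  ")
            h = hashlib.md5()
            with open(path, "rb") as data:
                for chunk in iter(lambda: data.read(1 << 20), b""):
                    h.update(chunk)
            match = h.hexdigest() == digest
            print(f"{path}: {'OK' if match else 'FAILED'}")
            ok = ok and match
    return ok
```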
00:29:58.528 [2024-11-27 12:13:48.535759] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82937 ] 00:29:58.787 [2024-11-27 12:13:48.713152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.787 [2024-11-27 12:13:48.821757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.365 [2024-11-27 12:13:49.162117] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:59.365 [2024-11-27 12:13:49.162186] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:59.365 [2024-11-27 12:13:49.324623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.365 [2024-11-27 12:13:49.324673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:59.365 [2024-11-27 12:13:49.324689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:59.365 [2024-11-27 12:13:49.324699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.365 [2024-11-27 12:13:49.324747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.365 [2024-11-27 12:13:49.324762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:59.365 [2024-11-27 12:13:49.324772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:59.365 [2024-11-27 12:13:49.324782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.365 [2024-11-27 12:13:49.324802] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:59.365 [2024-11-27 12:13:49.325810] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:59.366 [2024-11-27 12:13:49.325840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.366 [2024-11-27 12:13:49.325851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:59.366 [2024-11-27 12:13:49.325862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:29:59.366 [2024-11-27 12:13:49.325873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.366 [2024-11-27 12:13:49.327463] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:59.366 [2024-11-27 12:13:49.346031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.366 [2024-11-27 12:13:49.346069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:59.366 [2024-11-27 12:13:49.346083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.598 ms 00:29:59.366 [2024-11-27 12:13:49.346095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.366 [2024-11-27 12:13:49.346163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.366 [2024-11-27 12:13:49.346176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:59.366 [2024-11-27 12:13:49.346187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:29:59.366 [2024-11-27 12:13:49.346197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.366 [2024-11-27 12:13:49.353284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
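Every management step in the startup sequence that follows is traced the same way: an "Action" record, then separate `name:`, `duration:`, and `status:` records from mngt/ftl_mngt.c. When digging through a log like this one it can help to pull those tuples out programmatically; a small sketch, with patterns inferred from the record shapes above rather than from any SPDK API:

```python
# Extract (step name, duration in ms, status) tuples from an FTL trace log.
# Each field is its own NOTICE record followed by an elapsed HH:MM:SS stamp,
# so the three lists pair up positionally.
import re

def parse_trace_steps(text: str):
    names = re.findall(r"name: (.+?) \d{2}:\d{2}:\d{2}", text)
    durs = [float(d) for d in re.findall(r"duration: ([\d.]+) ms", text)]
    stats = [int(s) for s in re.findall(r"status: (-?\d+)", text)]
    return list(zip(names, durs, stats))

# On this section it yields e.g. ("Load super block", 18.598, 0)
# and ("Initialize memory pools", 7.024, 0).
```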
00:29:59.366 [2024-11-27 12:13:49.353314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:59.366 [2024-11-27 12:13:49.353342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.024 ms 00:29:59.366 [2024-11-27 12:13:49.353366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.366 [2024-11-27 12:13:49.353455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.366 [2024-11-27 12:13:49.353468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:59.366 [2024-11-27 12:13:49.353479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:29:59.366 [2024-11-27 12:13:49.353488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.366 [2024-11-27 12:13:49.353529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.366 [2024-11-27 12:13:49.353540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:59.367 [2024-11-27 12:13:49.353551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:59.367 [2024-11-27 12:13:49.353560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.367 [2024-11-27 12:13:49.353589] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:59.367 [2024-11-27 12:13:49.358437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.367 [2024-11-27 12:13:49.358470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:59.367 [2024-11-27 12:13:49.358486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.861 ms 00:29:59.367 [2024-11-27 12:13:49.358496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.367 [2024-11-27 12:13:49.358527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.367 [2024-11-27 12:13:49.358539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:59.367 [2024-11-27 12:13:49.358549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:59.367 [2024-11-27 12:13:49.358559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.367 [2024-11-27 12:13:49.358612] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:59.367 [2024-11-27 12:13:49.358635] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:59.367 [2024-11-27 12:13:49.358671] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:59.367 [2024-11-27 12:13:49.358693] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:59.367 [2024-11-27 12:13:49.358782] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:59.367 [2024-11-27 12:13:49.358796] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:59.367 [2024-11-27 12:13:49.358809] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:59.367 [2024-11-27 12:13:49.358822] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:59.367 [2024-11-27 12:13:49.358834] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:59.367 [2024-11-27 12:13:49.358844] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:59.367 [2024-11-27 12:13:49.358855] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:59.367 [2024-11-27 12:13:49.358868] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:59.367 [2024-11-27 12:13:49.358878] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:59.367 [2024-11-27 12:13:49.358888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.367 [2024-11-27 12:13:49.358898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:59.367 [2024-11-27 12:13:49.358908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:29:59.367 [2024-11-27 12:13:49.358918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.367 [2024-11-27 12:13:49.358988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.367 [2024-11-27 12:13:49.359003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:59.367 [2024-11-27 12:13:49.359013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:59.367 [2024-11-27 12:13:49.359023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.367 [2024-11-27 12:13:49.359120] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:59.367 [2024-11-27 12:13:49.359135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:59.367 [2024-11-27 12:13:49.359146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:59.367 [2024-11-27 12:13:49.359156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.367 [2024-11-27 12:13:49.359166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:59.367 [2024-11-27 12:13:49.359175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:59.367 [2024-11-27 12:13:49.359185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:59.367 [2024-11-27 12:13:49.359195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:59.367 [2024-11-27 12:13:49.359204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:59.367 [2024-11-27 12:13:49.359213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:59.367 [2024-11-27 12:13:49.359223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:59.367 [2024-11-27 12:13:49.359233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:59.367 [2024-11-27 12:13:49.359242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:59.367 [2024-11-27 12:13:49.359261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:59.367 [2024-11-27 12:13:49.359270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:59.367 [2024-11-27 12:13:49.359280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.367 [2024-11-27 12:13:49.359290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:59.367 [2024-11-27 12:13:49.359299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:59.367 [2024-11-27 12:13:49.359308] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.367 [2024-11-27 12:13:49.359317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:59.367 [2024-11-27 12:13:49.359326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:59.367 [2024-11-27 12:13:49.359336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.367 [2024-11-27 12:13:49.359345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:59.368 [2024-11-27 12:13:49.359366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:59.368 [2024-11-27 12:13:49.359376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.368 [2024-11-27 12:13:49.359386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:59.368 [2024-11-27 12:13:49.359395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:59.368 [2024-11-27 12:13:49.359405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.368 [2024-11-27 12:13:49.359414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:59.368 [2024-11-27 12:13:49.359423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:59.368 [2024-11-27 12:13:49.359432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.368 [2024-11-27 12:13:49.359442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:59.368 [2024-11-27 12:13:49.359451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:59.368 [2024-11-27 12:13:49.359460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:59.368 [2024-11-27 12:13:49.359469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:59.368 [2024-11-27 12:13:49.359478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:59.368 [2024-11-27 12:13:49.359487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:59.368 [2024-11-27 12:13:49.359496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:59.368 [2024-11-27 12:13:49.359506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:59.368 [2024-11-27 12:13:49.359515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.368 [2024-11-27 12:13:49.359524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:59.368 [2024-11-27 12:13:49.359533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:59.368 [2024-11-27 12:13:49.359542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.368 [2024-11-27 12:13:49.359551] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:59.368 [2024-11-27 12:13:49.359561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:59.368 [2024-11-27 12:13:49.359571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:59.368 [2024-11-27 12:13:49.359580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.368 [2024-11-27 12:13:49.359591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:59.368 [2024-11-27 12:13:49.359600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:59.368 [2024-11-27 12:13:49.359609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:59.368 
[2024-11-27 12:13:49.359618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:59.368 [2024-11-27 12:13:49.359627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:59.368 [2024-11-27 12:13:49.359637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:59.368 [2024-11-27 12:13:49.359647] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:59.368 [2024-11-27 12:13:49.359659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:59.368 [2024-11-27 12:13:49.359675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:59.368 [2024-11-27 12:13:49.359685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:59.368 [2024-11-27 12:13:49.359696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:59.368 [2024-11-27 12:13:49.359707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:59.368 [2024-11-27 12:13:49.359717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:59.368 [2024-11-27 12:13:49.359727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:59.368 [2024-11-27 12:13:49.359737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:59.368 [2024-11-27 12:13:49.359747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:59.368 [2024-11-27 12:13:49.359757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:59.368 [2024-11-27 12:13:49.359768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:59.368 [2024-11-27 12:13:49.359778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:59.368 [2024-11-27 12:13:49.359788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:59.368 [2024-11-27 12:13:49.359798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:59.368 [2024-11-27 12:13:49.359808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:59.368 [2024-11-27 12:13:49.359819] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:59.368 [2024-11-27 12:13:49.359832] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:59.368 [2024-11-27 12:13:49.359842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:59.368 [2024-11-27 12:13:49.359853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:59.368 [2024-11-27 12:13:49.359863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:59.368 [2024-11-27 12:13:49.359874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:59.368 [2024-11-27 12:13:49.359885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.368 [2024-11-27 12:13:49.359895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:59.368 [2024-11-27 12:13:49.359905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:29:59.368 [2024-11-27 12:13:49.359916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.368 [2024-11-27 12:13:49.398850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.368 [2024-11-27 12:13:49.398886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:59.368 [2024-11-27 12:13:49.398900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.952 ms 00:29:59.368 [2024-11-27 12:13:49.398915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.368 [2024-11-27 12:13:49.398996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.368 [2024-11-27 12:13:49.399007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:59.368 [2024-11-27 12:13:49.399018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:29:59.368 [2024-11-27 12:13:49.399028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.627 [2024-11-27 12:13:49.455562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.627 [2024-11-27 12:13:49.455597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:59.627 [2024-11-27 12:13:49.455611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.561 ms 00:29:59.627 [2024-11-27 12:13:49.455621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.627 [2024-11-27 12:13:49.455673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.627 [2024-11-27 12:13:49.455684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:59.627 [2024-11-27 12:13:49.455700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:59.627 [2024-11-27 12:13:49.455710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.627 [2024-11-27 12:13:49.456213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.627 [2024-11-27 12:13:49.456235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:59.627 [2024-11-27 12:13:49.456246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:29:59.627 [2024-11-27 12:13:49.456256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.627 [2024-11-27 12:13:49.456394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.627 [2024-11-27 12:13:49.456409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:59.627 [2024-11-27 12:13:49.456426] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:29:59.627 [2024-11-27 12:13:49.456436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.627 [2024-11-27 12:13:49.475721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.627 [2024-11-27 12:13:49.475757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:59.627 [2024-11-27 12:13:49.475770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.295 ms 00:29:59.627 [2024-11-27 12:13:49.475780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.627 [2024-11-27 12:13:49.494652] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:59.627 [2024-11-27 12:13:49.494690] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:59.627 [2024-11-27 12:13:49.494704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.627 [2024-11-27 12:13:49.494715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:59.627 [2024-11-27 12:13:49.494727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.835 ms 00:29:59.627 [2024-11-27 12:13:49.494737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.627 [2024-11-27 12:13:49.522402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.627 [2024-11-27 12:13:49.522438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:59.627 [2024-11-27 12:13:49.522468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.666 ms 00:29:59.627 [2024-11-27 12:13:49.522478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.627 [2024-11-27 12:13:49.539623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.627 [2024-11-27 12:13:49.539657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:59.627 [2024-11-27 12:13:49.539669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.116 ms 00:29:59.627 [2024-11-27 12:13:49.539678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.627 [2024-11-27 12:13:49.557156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.627 [2024-11-27 12:13:49.557190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:59.627 [2024-11-27 12:13:49.557203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.451 ms 00:29:59.627 [2024-11-27 12:13:49.557212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.627 [2024-11-27 12:13:49.557962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.627 [2024-11-27 12:13:49.557989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:59.628 [2024-11-27 12:13:49.558005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:29:59.628 [2024-11-27 12:13:49.558015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.628 [2024-11-27 12:13:49.640534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.628 [2024-11-27 12:13:49.640582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:59.628 [2024-11-27 12:13:49.640621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 82.632 ms 00:29:59.628 [2024-11-27 12:13:49.640633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.628 [2024-11-27 12:13:49.651243] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:59.628 [2024-11-27 12:13:49.653858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.628 [2024-11-27 12:13:49.653891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:59.628 [2024-11-27 12:13:49.653920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.198 ms 00:29:59.628 [2024-11-27 12:13:49.653932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.628 [2024-11-27 12:13:49.654015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.628 [2024-11-27 12:13:49.654029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:59.628 [2024-11-27 12:13:49.654044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:59.628 [2024-11-27 12:13:49.654054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.628 [2024-11-27 12:13:49.654925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.628 [2024-11-27 12:13:49.654948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:59.628 [2024-11-27 12:13:49.654959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.831 ms 00:29:59.628 [2024-11-27 12:13:49.654969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.628 [2024-11-27 12:13:49.654992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.628 [2024-11-27 12:13:49.655003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:59.628 [2024-11-27 12:13:49.655013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:59.628 [2024-11-27 12:13:49.655023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.628 [2024-11-27 12:13:49.655063] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:59.628 [2024-11-27 12:13:49.655076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.628 [2024-11-27 12:13:49.655086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:59.628 [2024-11-27 12:13:49.655096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:59.628 [2024-11-27 12:13:49.655105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.887 [2024-11-27 12:13:49.690401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.887 [2024-11-27 12:13:49.690442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:59.887 [2024-11-27 12:13:49.690462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.333 ms 00:29:59.887 [2024-11-27 12:13:49.690473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.887 [2024-11-27 12:13:49.690548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.887 [2024-11-27 12:13:49.690561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:59.887 [2024-11-27 12:13:49.690572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:59.887 [2024-11-27 12:13:49.690583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
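One thing worth noticing in the "SB metadata layout - nvc" dump a few records back: the regions tile the NV cache exactly, with each region's blk_offs equal to the previous region's blk_offs + blk_sz, and the trailing free region (type 0xfffffffe) ending at the device capacity. Assuming the FTL's 4 KiB block size (which the 5171.00 MiB capacity and the MiB figures in the region dump imply), a quick check with the hex values transcribed from the dump:

```python
# Regions from the "SB metadata layout - nvc" dump, as (blk_offs, blk_sz).
regions = [
    (0x0, 0x20), (0x20, 0x5000), (0x5020, 0x80), (0x50a0, 0x80),
    (0x5120, 0x800), (0x5920, 0x800), (0x6120, 0x800), (0x6920, 0x800),
    (0x7120, 0x40), (0x7160, 0x40), (0x71a0, 0x20), (0x71c0, 0x20),
    (0x71e0, 0x20), (0x7200, 0x20), (0x7220, 0x13c0e0),
]
# Each region starts exactly where the previous one ends...
for (off, sz), (nxt, _) in zip(regions, regions[1:]):
    assert off + sz == nxt, hex(nxt)
# ...and the last one ends at the NV cache capacity: 5171 MiB in 4 KiB blocks.
assert regions[-1][0] + regions[-1][1] == 5171 * (1 << 20) // 4096  # 0x143300
```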
00:29:59.887 [2024-11-27 12:13:49.691685] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 367.213 ms, result 0 00:30:01.261  [2024-11-27T12:13:51.926Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-27T12:13:53.301Z] Copying: 51/1024 [MB] (25 MBps) [2024-11-27T12:13:54.235Z] Copying: 77/1024 [MB] (25 MBps) [2024-11-27T12:13:55.170Z] Copying: 102/1024 [MB] (25 MBps) [2024-11-27T12:13:56.107Z] Copying: 128/1024 [MB] (25 MBps) [2024-11-27T12:13:57.116Z] Copying: 154/1024 [MB] (25 MBps) [2024-11-27T12:13:58.053Z] Copying: 181/1024 [MB] (26 MBps) [2024-11-27T12:13:58.988Z] Copying: 207/1024 [MB] (26 MBps) [2024-11-27T12:13:59.924Z] Copying: 233/1024 [MB] (26 MBps) [2024-11-27T12:14:01.302Z] Copying: 260/1024 [MB] (26 MBps) [2024-11-27T12:14:02.238Z] Copying: 286/1024 [MB] (26 MBps) [2024-11-27T12:14:03.175Z] Copying: 313/1024 [MB] (26 MBps) [2024-11-27T12:14:04.113Z] Copying: 339/1024 [MB] (26 MBps) [2024-11-27T12:14:05.049Z] Copying: 366/1024 [MB] (26 MBps) [2024-11-27T12:14:05.985Z] Copying: 392/1024 [MB] (26 MBps) [2024-11-27T12:14:06.921Z] Copying: 419/1024 [MB] (26 MBps) [2024-11-27T12:14:08.298Z] Copying: 445/1024 [MB] (26 MBps) [2024-11-27T12:14:08.865Z] Copying: 470/1024 [MB] (25 MBps) [2024-11-27T12:14:10.240Z] Copying: 495/1024 [MB] (24 MBps) [2024-11-27T12:14:11.202Z] Copying: 521/1024 [MB] (25 MBps) [2024-11-27T12:14:12.137Z] Copying: 546/1024 [MB] (25 MBps) [2024-11-27T12:14:13.111Z] Copying: 573/1024 [MB] (26 MBps) [2024-11-27T12:14:14.044Z] Copying: 599/1024 [MB] (26 MBps) [2024-11-27T12:14:14.978Z] Copying: 626/1024 [MB] (26 MBps) [2024-11-27T12:14:15.913Z] Copying: 652/1024 [MB] (26 MBps) [2024-11-27T12:14:17.290Z] Copying: 679/1024 [MB] (26 MBps) [2024-11-27T12:14:17.857Z] Copying: 705/1024 [MB] (26 MBps) [2024-11-27T12:14:19.232Z] Copying: 730/1024 [MB] (25 MBps) [2024-11-27T12:14:20.167Z] Copying: 757/1024 [MB] (26 MBps) [2024-11-27T12:14:21.103Z] Copying: 783/1024 [MB] (26 MBps) [2024-11-27T12:14:22.040Z] Copying: 810/1024 [MB] (26 MBps) [2024-11-27T12:14:22.975Z] Copying: 836/1024 [MB] (26 MBps) [2024-11-27T12:14:23.911Z] Copying: 862/1024 [MB] (25 MBps) [2024-11-27T12:14:24.848Z] Copying: 888/1024 [MB] (26 MBps) [2024-11-27T12:14:26.273Z] Copying: 914/1024 [MB] (26 MBps) [2024-11-27T12:14:26.839Z] Copying: 940/1024 [MB] (25 MBps) [2024-11-27T12:14:28.217Z] Copying: 966/1024 [MB] (25 MBps) [2024-11-27T12:14:29.155Z] Copying: 993/1024 [MB] (26 MBps) [2024-11-27T12:14:29.155Z] Copying: 1019/1024 [MB] (26 MBps) [2024-11-27T12:14:29.155Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-27 12:14:29.024319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.102 [2024-11-27 12:14:29.024382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:39.102 [2024-11-27 12:14:29.024399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:39.102 [2024-11-27 12:14:29.024409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.102 [2024-11-27 12:14:29.024430] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:39.102 [2024-11-27 12:14:29.028920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.102 [2024-11-27 12:14:29.028967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:39.102 [2024-11-27 12:14:29.028981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.480 ms 00:30:39.102 
[2024-11-27 12:14:29.028991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.102 [2024-11-27 12:14:29.029176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.102 [2024-11-27 12:14:29.029188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:39.102 [2024-11-27 12:14:29.029200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:30:39.102 [2024-11-27 12:14:29.029210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.102 [2024-11-27 12:14:29.032108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.102 [2024-11-27 12:14:29.032134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:39.102 [2024-11-27 12:14:29.032151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.889 ms 00:30:39.102 [2024-11-27 12:14:29.032377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.102 [2024-11-27 12:14:29.037694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.102 [2024-11-27 12:14:29.037748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:39.102 [2024-11-27 12:14:29.037761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.304 ms 00:30:39.102 [2024-11-27 12:14:29.037771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.102 [2024-11-27 12:14:29.072380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.102 [2024-11-27 12:14:29.072421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:39.102 [2024-11-27 12:14:29.072435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.584 ms 00:30:39.102 [2024-11-27 12:14:29.072444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.102 [2024-11-27 12:14:29.092077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.102 [2024-11-27 12:14:29.092114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:39.102 [2024-11-27 12:14:29.092126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.610 ms 00:30:39.102 [2024-11-27 12:14:29.092136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.102 [2024-11-27 12:14:29.094130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.102 [2024-11-27 12:14:29.094166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:39.103 [2024-11-27 12:14:29.094179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.935 ms 00:30:39.103 [2024-11-27 12:14:29.094189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.103 [2024-11-27 12:14:29.128969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.103 [2024-11-27 12:14:29.129004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:39.103 [2024-11-27 12:14:29.129016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.820 ms 00:30:39.103 [2024-11-27 12:14:29.129024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.362 [2024-11-27 12:14:29.164031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.362 [2024-11-27 12:14:29.164065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:39.362 [2024-11-27 12:14:29.164077] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.010 ms 00:30:39.362 [2024-11-27 12:14:29.164086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.362 [2024-11-27 12:14:29.197502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.362 [2024-11-27 12:14:29.197539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:39.362 [2024-11-27 12:14:29.197567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.417 ms 00:30:39.362 [2024-11-27 12:14:29.197576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.362 [2024-11-27 12:14:29.231551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.362 [2024-11-27 12:14:29.231592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:39.362 [2024-11-27 12:14:29.231604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.955 ms 00:30:39.362 [2024-11-27 12:14:29.231613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.362 [2024-11-27 12:14:29.231664] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:39.362 [2024-11-27 12:14:29.231686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:39.362 [2024-11-27 12:14:29.231709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:39.362 [2024-11-27 12:14:29.231720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:39.362 [2024-11-27 12:14:29.231731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:39.362 [2024-11-27 12:14:29.231741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:39.362 [2024-11-27 12:14:29.231752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:39.362 [2024-11-27 12:14:29.231762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:39.362 [2024-11-27 12:14:29.231772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:39.362 [2024-11-27 12:14:29.231783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:39.362 [2024-11-27 12:14:29.231793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:39.362 [2024-11-27 12:14:29.231803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:39.362 [2024-11-27 12:14:29.231813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:39.362 [2024-11-27 12:14:29.231823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:39.362 [2024-11-27 12:14:29.231833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:39.362 [2024-11-27 12:14:29.231843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:39.362 [2024-11-27 12:14:29.231853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:39.362 [2024-11-27 12:14:29.231863] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:39.362 [2024-11-27 12:14:29.231873 .. 12:14:29.232665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 18-91: 0 / 261120 wr_cnt: 0 state: free (74 identical per-band entries condensed) 
00:30:39.363 [2024-11-27 12:14:29.232676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:39.363 [2024-11-27 12:14:29.232686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:39.363 [2024-11-27 12:14:29.232697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:39.363 [2024-11-27 12:14:29.232707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:39.363 [2024-11-27 12:14:29.232717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:39.363 [2024-11-27 12:14:29.232727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:39.363 [2024-11-27 12:14:29.232738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:39.363 [2024-11-27 12:14:29.232749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:39.363 [2024-11-27 12:14:29.232759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:39.363 [2024-11-27 12:14:29.232776] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:39.363 [2024-11-27 12:14:29.232786] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36b3d101-e517-4a51-840e-d1a112a8f9ea 00:30:39.363 [2024-11-27 12:14:29.232798] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:39.363 [2024-11-27 12:14:29.232807] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:39.363 [2024-11-27 12:14:29.232817] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:39.363 [2024-11-27 12:14:29.232828] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:39.363 [2024-11-27 12:14:29.232848] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:39.363 [2024-11-27 12:14:29.232859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:39.363 [2024-11-27 12:14:29.232868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:39.363 [2024-11-27 12:14:29.232877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:39.363 [2024-11-27 12:14:29.232886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:39.363 [2024-11-27 12:14:29.232895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.363 [2024-11-27 12:14:29.232905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:39.363 [2024-11-27 12:14:29.232916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.234 ms 00:30:39.363 [2024-11-27 12:14:29.232931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.363 [2024-11-27 12:14:29.252684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.363 [2024-11-27 12:14:29.252715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:39.363 [2024-11-27 12:14:29.252728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.749 ms 00:30:39.363 [2024-11-27 12:14:29.252738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.363 [2024-11-27 12:14:29.253311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:39.363 [2024-11-27 12:14:29.253336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:39.363 [2024-11-27 12:14:29.253347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:30:39.363 [2024-11-27 12:14:29.253372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.363 [2024-11-27 12:14:29.302947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.363 [2024-11-27 12:14:29.302985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:39.363 [2024-11-27 12:14:29.302997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.363 [2024-11-27 12:14:29.303007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.363 [2024-11-27 12:14:29.303073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.363 [2024-11-27 12:14:29.303089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:39.363 [2024-11-27 12:14:29.303099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.363 [2024-11-27 12:14:29.303109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.363 [2024-11-27 12:14:29.303173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.363 [2024-11-27 12:14:29.303186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:39.363 [2024-11-27 12:14:29.303197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.363 [2024-11-27 12:14:29.303206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.363 [2024-11-27 12:14:29.303223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.363 [2024-11-27 12:14:29.303234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:39.363 [2024-11-27 12:14:29.303249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.363 [2024-11-27 12:14:29.303259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.623 [2024-11-27 12:14:29.422454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.623 [2024-11-27 12:14:29.422502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:39.623 [2024-11-27 12:14:29.422517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.623 [2024-11-27 12:14:29.422528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.623 [2024-11-27 12:14:29.518961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.623 [2024-11-27 12:14:29.519019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:39.623 [2024-11-27 12:14:29.519034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.623 [2024-11-27 12:14:29.519044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.623 [2024-11-27 12:14:29.519144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.623 [2024-11-27 12:14:29.519156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:39.623 [2024-11-27 12:14:29.519167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.623 [2024-11-27 12:14:29.519177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.623 
[2024-11-27 12:14:29.519215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.623 [2024-11-27 12:14:29.519226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:39.623 [2024-11-27 12:14:29.519237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.623 [2024-11-27 12:14:29.519251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.623 [2024-11-27 12:14:29.519378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.623 [2024-11-27 12:14:29.519393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:39.623 [2024-11-27 12:14:29.519404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.623 [2024-11-27 12:14:29.519414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.623 [2024-11-27 12:14:29.519452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.623 [2024-11-27 12:14:29.519464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:39.623 [2024-11-27 12:14:29.519475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.623 [2024-11-27 12:14:29.519485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.623 [2024-11-27 12:14:29.519526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.623 [2024-11-27 12:14:29.519537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:39.623 [2024-11-27 12:14:29.519547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.623 [2024-11-27 12:14:29.519557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.623 [2024-11-27 12:14:29.519596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.623 [2024-11-27 12:14:29.519608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:39.623 [2024-11-27 12:14:29.519618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.623 [2024-11-27 12:14:29.519632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.623 [2024-11-27 12:14:29.519773] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 496.199 ms, result 0 00:30:40.560 00:30:40.560 00:30:40.560 12:14:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:42.466 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:30:42.466 12:14:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:30:42.466 12:14:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:30:42.466 12:14:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:42.466 12:14:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:42.466 12:14:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:42.466 12:14:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:42.466 12:14:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:42.466 Process with pid 81119 
is not found 00:30:42.466 12:14:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81119 00:30:42.466 12:14:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81119 ']' 00:30:42.466 12:14:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81119 00:30:42.466 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81119) - No such process 00:30:42.466 12:14:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81119 is not found' 00:30:42.466 12:14:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:30:42.725 Remove shared memory files 00:30:42.725 12:14:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:30:42.725 12:14:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:42.725 12:14:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:42.985 12:14:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:42.985 12:14:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:30:42.985 12:14:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:42.985 12:14:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:42.985 00:30:42.985 real 3m38.483s 00:30:42.985 user 4m5.054s 00:30:42.985 sys 0m39.306s 00:30:42.985 12:14:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:42.985 ************************************ 00:30:42.985 END TEST ftl_dirty_shutdown 00:30:42.985 ************************************ 00:30:42.985 12:14:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:42.985 12:14:32 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:42.985 12:14:32 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:42.985 12:14:32 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:42.985 12:14:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:42.985 ************************************ 00:30:42.985 START TEST ftl_upgrade_shutdown 00:30:42.985 ************************************ 00:30:42.985 12:14:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:42.985 * Looking for test storage... 
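The teardown above probes pid 81119 with kill -0 and, when the target has already exited, prints a notice instead of failing the test. A condensed sketch of that killprocess pattern, simplified from the behavior visible in the trace (the real helper in test/common/autotest_common.sh does more bookkeeping than shown here):

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1             # mirrors the "'[' -z 81119 ']'" guard above
      if kill -0 "$pid" 2>/dev/null; then   # process still alive: terminate and reap it
          kill "$pid" && wait "$pid"
      else                                  # already gone, as in this run
          echo "Process with pid $pid is not found"
      fi
  }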
00:30:42.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:42.985 12:14:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:42.985 12:14:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:30:42.985 12:14:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:43.245 12:14:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:43.245 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:43.245 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:43.245 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:43.245 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:43.245 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:43.245 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:43.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.246 --rc genhtml_branch_coverage=1 00:30:43.246 --rc genhtml_function_coverage=1 00:30:43.246 --rc genhtml_legend=1 00:30:43.246 --rc geninfo_all_blocks=1 00:30:43.246 --rc geninfo_unexecuted_blocks=1 00:30:43.246 00:30:43.246 ' 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:43.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.246 --rc genhtml_branch_coverage=1 00:30:43.246 --rc genhtml_function_coverage=1 00:30:43.246 --rc genhtml_legend=1 00:30:43.246 --rc geninfo_all_blocks=1 00:30:43.246 --rc geninfo_unexecuted_blocks=1 00:30:43.246 00:30:43.246 ' 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:43.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.246 --rc genhtml_branch_coverage=1 00:30:43.246 --rc genhtml_function_coverage=1 00:30:43.246 --rc genhtml_legend=1 00:30:43.246 --rc geninfo_all_blocks=1 00:30:43.246 --rc geninfo_unexecuted_blocks=1 00:30:43.246 00:30:43.246 ' 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:43.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.246 --rc genhtml_branch_coverage=1 00:30:43.246 --rc genhtml_function_coverage=1 00:30:43.246 --rc genhtml_legend=1 00:30:43.246 --rc geninfo_all_blocks=1 00:30:43.246 --rc geninfo_unexecuted_blocks=1 00:30:43.246 00:30:43.246 ' 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:30:43.246 12:14:33 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:43.246 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:43.247 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83458 00:30:43.247 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:43.247 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83458 00:30:43.247 12:14:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:30:43.247 12:14:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83458 ']' 00:30:43.247 12:14:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.247 12:14:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:43.247 12:14:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.247 12:14:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:43.247 12:14:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:43.247 [2024-11-27 12:14:33.242867] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
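waitforlisten blocks until the freshly started target answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-poll pattern, using the same binary, cpumask, and socket as above (rpc_get_methods is a stock SPDK RPC; the fixed 0.5 s interval and missing retry cap are simplifications of mine, not the real helper's behavior):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[0]' &
  spdk_tgt_pid=$!
  # Poll the RPC socket until the target services requests; bail out early
  # if the process dies before the socket ever comes up.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      kill -0 "$spdk_tgt_pid" 2>/dev/null || exit 1
      sleep 0.5
  done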
00:30:43.247 [2024-11-27 12:14:33.242985] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83458 ] 00:30:43.506 [2024-11-27 12:14:33.423039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.506 [2024-11-27 12:14:33.529541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:30:44.445 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:30:44.705 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:30:44.705 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:30:44.705 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:30:44.705 12:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:30:44.705 12:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:44.705 12:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:44.705 12:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:30:44.705 12:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:30:44.964 12:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:44.964 { 00:30:44.964 "name": "basen1", 00:30:44.964 "aliases": [ 00:30:44.964 "ee22a38b-91b6-4c74-acb7-debc4e65f636" 00:30:44.964 ], 00:30:44.964 "product_name": "NVMe disk", 00:30:44.964 "block_size": 4096, 00:30:44.964 "num_blocks": 1310720, 00:30:44.964 "uuid": "ee22a38b-91b6-4c74-acb7-debc4e65f636", 00:30:44.964 "numa_id": -1, 00:30:44.964 "assigned_rate_limits": { 00:30:44.964 "rw_ios_per_sec": 0, 00:30:44.964 "rw_mbytes_per_sec": 0, 00:30:44.964 "r_mbytes_per_sec": 0, 00:30:44.964 "w_mbytes_per_sec": 0 00:30:44.964 }, 00:30:44.964 "claimed": true, 00:30:44.964 "claim_type": "read_many_write_one", 00:30:44.964 "zoned": false, 00:30:44.964 "supported_io_types": { 00:30:44.964 "read": true, 00:30:44.964 "write": true, 00:30:44.964 "unmap": true, 00:30:44.964 "flush": true, 00:30:44.964 "reset": true, 00:30:44.964 "nvme_admin": true, 00:30:44.964 "nvme_io": true, 00:30:44.964 "nvme_io_md": false, 00:30:44.964 "write_zeroes": true, 00:30:44.964 "zcopy": false, 00:30:44.964 "get_zone_info": false, 00:30:44.964 "zone_management": false, 00:30:44.964 "zone_append": false, 00:30:44.964 "compare": true, 00:30:44.964 "compare_and_write": false, 00:30:44.964 "abort": true, 00:30:44.964 "seek_hole": false, 00:30:44.964 "seek_data": false, 00:30:44.964 "copy": true, 00:30:44.964 "nvme_iov_md": false 00:30:44.964 }, 00:30:44.964 "driver_specific": { 00:30:44.964 "nvme": [ 00:30:44.964 { 00:30:44.964 "pci_address": "0000:00:11.0", 00:30:44.964 "trid": { 00:30:44.964 "trtype": "PCIe", 00:30:44.964 "traddr": "0000:00:11.0" 00:30:44.964 }, 00:30:44.964 "ctrlr_data": { 00:30:44.964 "cntlid": 0, 00:30:44.964 "vendor_id": "0x1b36", 00:30:44.964 "model_number": "QEMU NVMe Ctrl", 00:30:44.964 "serial_number": "12341", 00:30:44.964 "firmware_revision": "8.0.0", 00:30:44.964 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:44.964 "oacs": { 00:30:44.964 "security": 0, 00:30:44.964 "format": 1, 00:30:44.964 "firmware": 0, 00:30:44.964 "ns_manage": 1 00:30:44.964 }, 00:30:44.964 "multi_ctrlr": false, 00:30:44.964 "ana_reporting": false 00:30:44.964 }, 00:30:44.964 "vs": { 00:30:44.964 "nvme_version": "1.4" 00:30:44.964 }, 00:30:44.964 "ns_data": { 00:30:44.964 "id": 1, 00:30:44.964 "can_share": false 00:30:44.964 } 00:30:44.964 } 00:30:44.964 ], 00:30:44.964 "mp_policy": "active_passive" 00:30:44.964 } 00:30:44.964 } 00:30:44.964 ]' 00:30:44.964 12:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:44.964 12:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:44.964 12:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:44.964 12:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:44.964 12:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:44.964 12:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:30:44.964 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:30:44.964 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:30:44.965 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:30:44.965 12:14:34 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:44.965 12:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:45.225 12:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=0989c05b-2f40-461a-9537-7bc5d8471547 00:30:45.225 12:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:30:45.225 12:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0989c05b-2f40-461a-9537-7bc5d8471547 00:30:45.484 12:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:30:45.744 12:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=337b4f46-63dc-4369-871a-88c846e5c43c 00:30:45.744 12:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 337b4f46-63dc-4369-871a-88c846e5c43c 00:30:46.003 12:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=d80dcab3-1159-4132-922e-ae0efd6500a4 00:30:46.004 12:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z d80dcab3-1159-4132-922e-ae0efd6500a4 ]] 00:30:46.004 12:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 d80dcab3-1159-4132-922e-ae0efd6500a4 5120 00:30:46.004 12:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:30:46.004 12:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:46.004 12:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=d80dcab3-1159-4132-922e-ae0efd6500a4 00:30:46.004 12:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:30:46.004 12:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size d80dcab3-1159-4132-922e-ae0efd6500a4 00:30:46.004 12:14:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d80dcab3-1159-4132-922e-ae0efd6500a4 00:30:46.004 12:14:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:46.004 12:14:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:46.004 12:14:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:30:46.004 12:14:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d80dcab3-1159-4132-922e-ae0efd6500a4 00:30:46.004 12:14:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:46.004 { 00:30:46.004 "name": "d80dcab3-1159-4132-922e-ae0efd6500a4", 00:30:46.004 "aliases": [ 00:30:46.004 "lvs/basen1p0" 00:30:46.004 ], 00:30:46.004 "product_name": "Logical Volume", 00:30:46.004 "block_size": 4096, 00:30:46.004 "num_blocks": 5242880, 00:30:46.004 "uuid": "d80dcab3-1159-4132-922e-ae0efd6500a4", 00:30:46.004 "assigned_rate_limits": { 00:30:46.004 "rw_ios_per_sec": 0, 00:30:46.004 "rw_mbytes_per_sec": 0, 00:30:46.004 "r_mbytes_per_sec": 0, 00:30:46.004 "w_mbytes_per_sec": 0 00:30:46.004 }, 00:30:46.004 "claimed": false, 00:30:46.004 "zoned": false, 00:30:46.004 "supported_io_types": { 00:30:46.004 "read": true, 00:30:46.004 "write": true, 00:30:46.004 "unmap": true, 00:30:46.004 "flush": false, 00:30:46.004 "reset": true, 00:30:46.004 "nvme_admin": false, 00:30:46.004 "nvme_io": false, 00:30:46.004 "nvme_io_md": false, 00:30:46.004 "write_zeroes": 
true, 00:30:46.004 "zcopy": false, 00:30:46.004 "get_zone_info": false, 00:30:46.004 "zone_management": false, 00:30:46.004 "zone_append": false, 00:30:46.004 "compare": false, 00:30:46.004 "compare_and_write": false, 00:30:46.004 "abort": false, 00:30:46.004 "seek_hole": true, 00:30:46.004 "seek_data": true, 00:30:46.004 "copy": false, 00:30:46.004 "nvme_iov_md": false 00:30:46.004 }, 00:30:46.004 "driver_specific": { 00:30:46.004 "lvol": { 00:30:46.004 "lvol_store_uuid": "337b4f46-63dc-4369-871a-88c846e5c43c", 00:30:46.004 "base_bdev": "basen1", 00:30:46.004 "thin_provision": true, 00:30:46.004 "num_allocated_clusters": 0, 00:30:46.004 "snapshot": false, 00:30:46.004 "clone": false, 00:30:46.004 "esnap_clone": false 00:30:46.004 } 00:30:46.004 } 00:30:46.004 } 00:30:46.004 ]' 00:30:46.004 12:14:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:46.263 12:14:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:46.263 12:14:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:46.263 12:14:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:30:46.263 12:14:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:30:46.263 12:14:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:30:46.263 12:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:30:46.263 12:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:30:46.263 12:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:30:46.522 12:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:30:46.523 12:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:30:46.523 12:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:30:46.783 12:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:30:46.783 12:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:30:46.783 12:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d d80dcab3-1159-4132-922e-ae0efd6500a4 -c cachen1p0 --l2p_dram_limit 2 00:30:46.783 [2024-11-27 12:14:36.757971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:46.783 [2024-11-27 12:14:36.758213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:46.783 [2024-11-27 12:14:36.758250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:46.783 [2024-11-27 12:14:36.758264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:46.783 [2024-11-27 12:14:36.758345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:46.783 [2024-11-27 12:14:36.758378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:46.783 [2024-11-27 12:14:36.758397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:30:46.783 [2024-11-27 12:14:36.758410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:46.783 [2024-11-27 12:14:36.758441] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:46.783 [2024-11-27 
12:14:36.759413] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:46.783 [2024-11-27 12:14:36.759448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:46.783 [2024-11-27 12:14:36.759461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:46.783 [2024-11-27 12:14:36.759478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.011 ms 00:30:46.783 [2024-11-27 12:14:36.759491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:46.783 [2024-11-27 12:14:36.759578] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID f988fc2c-fca4-4808-9592-ac6190986ab8 00:30:46.783 [2024-11-27 12:14:36.762091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:46.783 [2024-11-27 12:14:36.762140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:30:46.783 [2024-11-27 12:14:36.762156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:30:46.783 [2024-11-27 12:14:36.762173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:46.783 [2024-11-27 12:14:36.776215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:46.783 [2024-11-27 12:14:36.776257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:46.783 [2024-11-27 12:14:36.776282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.997 ms 00:30:46.783 [2024-11-27 12:14:36.776298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:46.783 [2024-11-27 12:14:36.776350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:46.783 [2024-11-27 12:14:36.776384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:46.783 [2024-11-27 12:14:36.776398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:30:46.783 [2024-11-27 12:14:36.776417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:46.783 [2024-11-27 12:14:36.776476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:46.783 [2024-11-27 12:14:36.776516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:46.783 [2024-11-27 12:14:36.776537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:46.783 [2024-11-27 12:14:36.776553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:46.783 [2024-11-27 12:14:36.776581] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:46.783 [2024-11-27 12:14:36.783022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:46.783 [2024-11-27 12:14:36.783060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:46.783 [2024-11-27 12:14:36.783081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.454 ms 00:30:46.783 [2024-11-27 12:14:36.783093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:46.783 [2024-11-27 12:14:36.783130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:46.783 [2024-11-27 12:14:36.783142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:46.783 [2024-11-27 12:14:36.783158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:46.783 [2024-11-27 12:14:36.783170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:46.783 [2024-11-27 12:14:36.783214] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:30:46.783 [2024-11-27 12:14:36.783346] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:46.783 [2024-11-27 12:14:36.783389] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:46.783 [2024-11-27 12:14:36.783405] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:46.783 [2024-11-27 12:14:36.783424] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:46.783 [2024-11-27 12:14:36.783438] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:46.783 [2024-11-27 12:14:36.783455] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:46.783 [2024-11-27 12:14:36.783471] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:46.783 [2024-11-27 12:14:36.783486] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:46.783 [2024-11-27 12:14:36.783498] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:46.783 [2024-11-27 12:14:36.783513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:46.783 [2024-11-27 12:14:36.783525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:46.783 [2024-11-27 12:14:36.783541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.302 ms 00:30:46.783 [2024-11-27 12:14:36.783553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:46.783 [2024-11-27 12:14:36.783629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:46.783 [2024-11-27 12:14:36.783656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:46.783 [2024-11-27 12:14:36.783672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:30:46.783 [2024-11-27 12:14:36.783684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:46.783 [2024-11-27 12:14:36.783786] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:46.783 [2024-11-27 12:14:36.783802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:46.783 [2024-11-27 12:14:36.783818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:46.783 [2024-11-27 12:14:36.783831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:46.783 [2024-11-27 12:14:36.783846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:46.783 [2024-11-27 12:14:36.783858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:46.783 [2024-11-27 12:14:36.783872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:46.783 [2024-11-27 12:14:36.783884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:46.783 [2024-11-27 12:14:36.783898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:46.783 [2024-11-27 12:14:36.783909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:46.783 [2024-11-27 12:14:36.783924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:46.784 [2024-11-27 12:14:36.783936] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:30:46.784 [2024-11-27 12:14:36.783950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:46.784 [2024-11-27 12:14:36.783962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:46.784 [2024-11-27 12:14:36.783977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:46.784 [2024-11-27 12:14:36.783988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:46.784 [2024-11-27 12:14:36.784005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:46.784 [2024-11-27 12:14:36.784015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:46.784 [2024-11-27 12:14:36.784031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:46.784 [2024-11-27 12:14:36.784042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:46.784 [2024-11-27 12:14:36.784057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:46.784 [2024-11-27 12:14:36.784075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:46.784 [2024-11-27 12:14:36.784090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:46.784 [2024-11-27 12:14:36.784101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:46.784 [2024-11-27 12:14:36.784115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:46.784 [2024-11-27 12:14:36.784126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:46.784 [2024-11-27 12:14:36.784140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:46.784 [2024-11-27 12:14:36.784150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:46.784 [2024-11-27 12:14:36.784164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:46.784 [2024-11-27 12:14:36.784175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:46.784 [2024-11-27 12:14:36.784189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:46.784 [2024-11-27 12:14:36.784200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:46.784 [2024-11-27 12:14:36.784221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:46.784 [2024-11-27 12:14:36.784232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:46.784 [2024-11-27 12:14:36.784249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:46.784 [2024-11-27 12:14:36.784260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:46.784 [2024-11-27 12:14:36.784277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:46.784 [2024-11-27 12:14:36.784288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:46.784 [2024-11-27 12:14:36.784305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:46.784 [2024-11-27 12:14:36.784317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:46.784 [2024-11-27 12:14:36.784331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:46.784 [2024-11-27 12:14:36.784342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:46.784 [2024-11-27 12:14:36.784604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:46.784 [2024-11-27 12:14:36.784661] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:30:46.784 [2024-11-27 12:14:36.784706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:46.784 [2024-11-27 12:14:36.784743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:46.784 [2024-11-27 12:14:36.784788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:46.784 [2024-11-27 12:14:36.784882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:46.784 [2024-11-27 12:14:36.784930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:46.784 [2024-11-27 12:14:36.784965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:46.784 [2024-11-27 12:14:36.785004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:46.784 [2024-11-27 12:14:36.785086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:46.784 [2024-11-27 12:14:36.785132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:46.784 [2024-11-27 12:14:36.785176] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:46.784 [2024-11-27 12:14:36.785295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:46.784 [2024-11-27 12:14:36.785354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:46.784 [2024-11-27 12:14:36.785515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:46.784 [2024-11-27 12:14:36.785573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:46.784 [2024-11-27 12:14:36.785688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:46.784 [2024-11-27 12:14:36.785764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:46.784 [2024-11-27 12:14:36.785864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:46.784 [2024-11-27 12:14:36.785921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:46.784 [2024-11-27 12:14:36.786018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:46.784 [2024-11-27 12:14:36.786201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:46.784 [2024-11-27 12:14:36.786267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:46.784 [2024-11-27 12:14:36.786373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:46.784 [2024-11-27 12:14:36.786396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:46.784 [2024-11-27 12:14:36.786408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:46.784 [2024-11-27 12:14:36.786428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:46.784 [2024-11-27 12:14:36.786441] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:46.784 [2024-11-27 12:14:36.786459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:46.784 [2024-11-27 12:14:36.786483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:46.784 [2024-11-27 12:14:36.786501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:46.784 [2024-11-27 12:14:36.786515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:46.784 [2024-11-27 12:14:36.786531] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:46.784 [2024-11-27 12:14:36.786546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:46.784 [2024-11-27 12:14:36.786563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:46.784 [2024-11-27 12:14:36.786577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.822 ms 00:30:46.784 [2024-11-27 12:14:36.786593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:46.784 [2024-11-27 12:14:36.786670] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
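At this point the FTL instance is fully laid out on top of the thin lvol (base device) and the NV-cache split. For reference, the bdev stack was assembled with the following rpc.py calls, each of which appears verbatim earlier in this transcript; the lvstore and lvol UUIDs are specific to this run and change on every invocation, and the preliminary clear_lvols deletion of the stale lvstore is omitted:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
  $RPC bdev_lvol_create_lvstore basen1 lvs
  $RPC bdev_lvol_create basen1p0 20480 -t -u 337b4f46-63dc-4369-871a-88c846e5c43c
  $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
  $RPC bdev_split_create cachen1 -s 5120 1
  $RPC -t 60 bdev_ftl_create -b ftl -d d80dcab3-1159-4132-922e-ae0efd6500a4 -c cachen1p0 --l2p_dram_limit 2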
00:30:46.784 [2024-11-27 12:14:36.786693] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:50.978 [2024-11-27 12:14:40.289193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.978 [2024-11-27 12:14:40.289477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:50.978 [2024-11-27 12:14:40.289578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3508.207 ms 00:30:50.978 [2024-11-27 12:14:40.289629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.979 [2024-11-27 12:14:40.336645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.979 [2024-11-27 12:14:40.336883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:50.979 [2024-11-27 12:14:40.337032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.729 ms 00:30:50.979 [2024-11-27 12:14:40.337082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.979 [2024-11-27 12:14:40.337216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.979 [2024-11-27 12:14:40.337399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:50.979 [2024-11-27 12:14:40.337499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:50.979 [2024-11-27 12:14:40.337555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.979 [2024-11-27 12:14:40.390525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.979 [2024-11-27 12:14:40.390717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:50.979 [2024-11-27 12:14:40.390827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 52.947 ms 00:30:50.979 [2024-11-27 12:14:40.390876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.979 [2024-11-27 12:14:40.390945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.979 [2024-11-27 12:14:40.390985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:50.979 [2024-11-27 12:14:40.391086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:50.979 [2024-11-27 12:14:40.391130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.979 [2024-11-27 12:14:40.392006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.979 [2024-11-27 12:14:40.392168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:50.979 [2024-11-27 12:14:40.392296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.785 ms 00:30:50.979 [2024-11-27 12:14:40.392343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.979 [2024-11-27 12:14:40.392442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.979 [2024-11-27 12:14:40.392602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:50.979 [2024-11-27 12:14:40.392646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:30:50.979 [2024-11-27 12:14:40.392690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.979 [2024-11-27 12:14:40.416120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.979 [2024-11-27 12:14:40.416304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:50.979 [2024-11-27 12:14:40.416418] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.412 ms 00:30:50.979 [2024-11-27 12:14:40.416467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.979 [2024-11-27 12:14:40.439692] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:50.979 [2024-11-27 12:14:40.441723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.979 [2024-11-27 12:14:40.441883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:50.979 [2024-11-27 12:14:40.442124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.161 ms 00:30:50.979 [2024-11-27 12:14:40.442211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.979 [2024-11-27 12:14:40.475773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.979 [2024-11-27 12:14:40.475927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:30:50.979 [2024-11-27 12:14:40.476048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.545 ms 00:30:50.979 [2024-11-27 12:14:40.476089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.979 [2024-11-27 12:14:40.476219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.979 [2024-11-27 12:14:40.476263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:50.979 [2024-11-27 12:14:40.476380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:30:50.979 [2024-11-27 12:14:40.476425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.979 [2024-11-27 12:14:40.509729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.979 [2024-11-27 12:14:40.509897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:30:50.979 [2024-11-27 12:14:40.510023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.265 ms 00:30:50.979 [2024-11-27 12:14:40.510042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.979 [2024-11-27 12:14:40.544125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.979 [2024-11-27 12:14:40.544168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:30:50.979 [2024-11-27 12:14:40.544187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.081 ms 00:30:50.979 [2024-11-27 12:14:40.544199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.979 [2024-11-27 12:14:40.544871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.979 [2024-11-27 12:14:40.544906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:50.979 [2024-11-27 12:14:40.544929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.626 ms 00:30:50.979 [2024-11-27 12:14:40.544942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.979 [2024-11-27 12:14:40.644514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.979 [2024-11-27 12:14:40.644557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:30:50.979 [2024-11-27 12:14:40.644582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 99.651 ms 00:30:50.979 [2024-11-27 12:14:40.644594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.979 [2024-11-27 12:14:40.680785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:50.979 [2024-11-27 12:14:40.680828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:30:50.979 [2024-11-27 12:14:40.680849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.137 ms 00:30:50.979 [2024-11-27 12:14:40.680861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.979 [2024-11-27 12:14:40.714639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.979 [2024-11-27 12:14:40.714681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:30:50.979 [2024-11-27 12:14:40.714700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.781 ms 00:30:50.979 [2024-11-27 12:14:40.714712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.979 [2024-11-27 12:14:40.748195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.979 [2024-11-27 12:14:40.748236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:50.979 [2024-11-27 12:14:40.748256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.485 ms 00:30:50.979 [2024-11-27 12:14:40.748268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.979 [2024-11-27 12:14:40.748323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.979 [2024-11-27 12:14:40.748337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:50.979 [2024-11-27 12:14:40.748370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:50.979 [2024-11-27 12:14:40.748384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.979 [2024-11-27 12:14:40.748529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.979 [2024-11-27 12:14:40.748547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:50.979 [2024-11-27 12:14:40.748564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:50.979 [2024-11-27 12:14:40.748575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.979 [2024-11-27 12:14:40.750009] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3997.994 ms, result 0 00:30:50.979 { 00:30:50.979 "name": "ftl", 00:30:50.979 "uuid": "f988fc2c-fca4-4808-9592-ac6190986ab8" 00:30:50.979 } 00:30:50.979 12:14:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:30:50.979 [2024-11-27 12:14:40.964429] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.979 12:14:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:30:51.239 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:30:51.498 [2024-11-27 12:14:41.364069] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:51.498 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:30:51.498 [2024-11-27 12:14:41.541004] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:51.757 12:14:41 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:30:52.017 Fill FTL, iteration 1 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83580 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83580 /var/tmp/spdk.tgt.sock 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83580 ']' 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:30:52.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:52.017 12:14:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:52.017 [2024-11-27 12:14:42.014129] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:30:52.017 [2024-11-27 12:14:42.014473] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83580 ] 00:30:52.276 [2024-11-27 12:14:42.192460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.276 [2024-11-27 12:14:42.301431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.213 12:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:53.213 12:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:53.214 12:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:30:53.473 ftln1 00:30:53.473 12:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:30:53.473 12:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:30:53.732 12:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:30:53.732 12:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83580 00:30:53.732 12:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83580 ']' 00:30:53.732 12:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83580 00:30:53.732 12:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:53.732 12:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:53.732 12:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83580 00:30:53.732 killing process with pid 83580 00:30:53.732 12:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:53.732 12:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:53.732 12:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83580' 00:30:53.732 12:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83580 00:30:53.732 12:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83580 00:30:56.269 12:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:56.269 12:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:56.269 [2024-11-27 12:14:45.915971] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:30:56.269 [2024-11-27 12:14:45.916085] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83634 ] 00:30:56.270 [2024-11-27 12:14:46.091967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.270 [2024-11-27 12:14:46.196660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.649 [2024-11-27T12:14:48.640Z] Copying: 251/1024 [MB] (251 MBps) [2024-11-27T12:14:50.017Z] Copying: 498/1024 [MB] (247 MBps) [2024-11-27T12:14:50.954Z] Copying: 748/1024 [MB] (250 MBps) [2024-11-27T12:14:50.954Z] Copying: 998/1024 [MB] (250 MBps) [2024-11-27T12:14:51.891Z] Copying: 1024/1024 [MB] (average 249 MBps) 00:31:01.838 00:31:01.838 Calculate MD5 checksum, iteration 1 12:14:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 12:14:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 12:14:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 12:14:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 12:14:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 12:14:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 12:14:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 12:14:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:02.097 [2024-11-27 12:14:51.936125] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:31:02.097 [2024-11-27 12:14:51.936466] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83701 ] 00:31:02.097 [2024-11-27 12:14:52.113784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.356 [2024-11-27 12:14:52.220925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.733 [2024-11-27T12:14:54.723Z] Copying: 575/1024 [MB] (575 MBps) [2024-11-27T12:14:55.364Z] Copying: 1024/1024 [MB] (average 571 MBps) 00:31:05.311 00:31:05.601 12:14:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 12:14:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:06.979
00:31:07.239 [2024-11-27 12:14:57.110958] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83757 ] 00:31:07.498 [2024-11-27 12:14:57.292173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.498 [2024-11-27 12:14:57.397496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:08.877  [2024-11-27T12:14:59.867Z] Copying: 244/1024 [MB] (244 MBps) [2024-11-27T12:15:01.246Z] Copying: 489/1024 [MB] (245 MBps) [2024-11-27T12:15:02.184Z] Copying: 731/1024 [MB] (242 MBps) [2024-11-27T12:15:02.184Z] Copying: 975/1024 [MB] (244 MBps) [2024-11-27T12:15:03.563Z] Copying: 1024/1024 [MB] (average 243 MBps) 00:31:13.510 00:31:13.510 12:15:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:31:13.510 12:15:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:31:13.510 Calculate MD5 checksum, iteration 2 00:31:13.510 12:15:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:13.510 12:15:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:13.510 12:15:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:13.510 12:15:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:13.510 12:15:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:13.510 12:15:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:13.510 [2024-11-27 12:15:03.295147] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:31:13.510 [2024-11-27 12:15:03.295672] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83821 ] 00:31:13.510 [2024-11-27 12:15:03.474907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.769 [2024-11-27 12:15:03.581798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.673 [2024-11-27T12:15:06.293Z] Copying: 569/1024 [MB] (569 MBps) [2024-11-27T12:15:07.673Z] Copying: 1024/1024 [MB] (average 572 MBps) 00:31:17.620 00:31:17.620 12:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 12:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:19.000 12:14:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:19.000 12:14:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=2b425c2862bd36bf7f43135735ed0dd0 00:31:19.000 12:15:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:19.000 12:15:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:19.000 12:15:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:19.260 [2024-11-27 12:15:09.077937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.260 [2024-11-27 12:15:09.078008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:19.260 [2024-11-27 12:15:09.078027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:31:19.260 [2024-11-27 12:15:09.078039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.260 [2024-11-27 12:15:09.078066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.260 [2024-11-27 12:15:09.078084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:19.260 [2024-11-27 12:15:09.078096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:19.260 [2024-11-27 12:15:09.078108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.260 [2024-11-27 12:15:09.078131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.260 [2024-11-27 12:15:09.078143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:19.260 [2024-11-27 12:15:09.078156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:19.260 [2024-11-27 12:15:09.078167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.260 [2024-11-27 12:15:09.078244] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.290 ms, result 0 00:31:19.260 true 00:31:19.260 12:15:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:19.260 { 00:31:19.260 "name": "ftl", 00:31:19.260 "properties": [ 00:31:19.260 { 00:31:19.260 "name": "superblock_version", 00:31:19.260 "value": 5, 00:31:19.260 "read-only": true 00:31:19.260 }, 00:31:19.260 { 00:31:19.260 "name": "base_device", 00:31:19.260 "bands": [ 00:31:19.260 { 00:31:19.260 "id": 0, 00:31:19.260 "state": "FREE", 00:31:19.260 "validity": 0.0 
00:31:19.260 }, 00:31:19.260 { 00:31:19.260 "id": 1, 00:31:19.260 "state": "FREE", 00:31:19.260 "validity": 0.0 00:31:19.260 }, 00:31:19.260 { 00:31:19.260 "id": 2, 00:31:19.260 "state": "FREE", 00:31:19.260 "validity": 0.0 00:31:19.260 }, 00:31:19.260 { 00:31:19.260 "id": 3, 00:31:19.260 "state": "FREE", 00:31:19.260 "validity": 0.0 00:31:19.260 }, 00:31:19.260 { 00:31:19.260 "id": 4, 00:31:19.260 "state": "FREE", 00:31:19.260 "validity": 0.0 00:31:19.260 }, 00:31:19.260 { 00:31:19.260 "id": 5, 00:31:19.260 "state": "FREE", 00:31:19.260 "validity": 0.0 00:31:19.260 }, 00:31:19.260 { 00:31:19.260 "id": 6, 00:31:19.260 "state": "FREE", 00:31:19.260 "validity": 0.0 00:31:19.260 }, 00:31:19.260 { 00:31:19.260 "id": 7, 00:31:19.260 "state": "FREE", 00:31:19.260 "validity": 0.0 00:31:19.260 }, 00:31:19.260 { 00:31:19.260 "id": 8, 00:31:19.260 "state": "FREE", 00:31:19.260 "validity": 0.0 00:31:19.260 }, 00:31:19.260 { 00:31:19.260 "id": 9, 00:31:19.260 "state": "FREE", 00:31:19.260 "validity": 0.0 00:31:19.260 }, 00:31:19.260 { 00:31:19.260 "id": 10, 00:31:19.260 "state": "FREE", 00:31:19.260 "validity": 0.0 00:31:19.260 }, 00:31:19.260 { 00:31:19.260 "id": 11, 00:31:19.260 "state": "FREE", 00:31:19.260 "validity": 0.0 00:31:19.260 }, 00:31:19.260 { 00:31:19.261 "id": 12, 00:31:19.261 "state": "FREE", 00:31:19.261 "validity": 0.0 00:31:19.261 }, 00:31:19.261 { 00:31:19.261 "id": 13, 00:31:19.261 "state": "FREE", 00:31:19.261 "validity": 0.0 00:31:19.261 }, 00:31:19.261 { 00:31:19.261 "id": 14, 00:31:19.261 "state": "FREE", 00:31:19.261 "validity": 0.0 00:31:19.261 }, 00:31:19.261 { 00:31:19.261 "id": 15, 00:31:19.261 "state": "FREE", 00:31:19.261 "validity": 0.0 00:31:19.261 }, 00:31:19.261 { 00:31:19.261 "id": 16, 00:31:19.261 "state": "FREE", 00:31:19.261 "validity": 0.0 00:31:19.261 }, 00:31:19.261 { 00:31:19.261 "id": 17, 00:31:19.261 "state": "FREE", 00:31:19.261 "validity": 0.0 00:31:19.261 } 00:31:19.261 ], 00:31:19.261 "read-only": true 00:31:19.261 }, 00:31:19.261 { 00:31:19.261 "name": "cache_device", 00:31:19.261 "type": "bdev", 00:31:19.261 "chunks": [ 00:31:19.261 { 00:31:19.261 "id": 0, 00:31:19.261 "state": "INACTIVE", 00:31:19.261 "utilization": 0.0 00:31:19.261 }, 00:31:19.261 { 00:31:19.261 "id": 1, 00:31:19.261 "state": "CLOSED", 00:31:19.261 "utilization": 1.0 00:31:19.261 }, 00:31:19.261 { 00:31:19.261 "id": 2, 00:31:19.261 "state": "CLOSED", 00:31:19.261 "utilization": 1.0 00:31:19.261 }, 00:31:19.261 { 00:31:19.261 "id": 3, 00:31:19.261 "state": "OPEN", 00:31:19.261 "utilization": 0.001953125 00:31:19.261 }, 00:31:19.261 { 00:31:19.261 "id": 4, 00:31:19.261 "state": "OPEN", 00:31:19.261 "utilization": 0.0 00:31:19.261 } 00:31:19.261 ], 00:31:19.261 "read-only": true 00:31:19.261 }, 00:31:19.261 { 00:31:19.261 "name": "verbose_mode", 00:31:19.261 "value": true, 00:31:19.261 "unit": "", 00:31:19.261 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:19.261 }, 00:31:19.261 { 00:31:19.261 "name": "prep_upgrade_on_shutdown", 00:31:19.261 "value": false, 00:31:19.261 "unit": "", 00:31:19.261 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:19.261 } 00:31:19.261 ] 00:31:19.261 } 00:31:19.520 12:15:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:31:19.520 [2024-11-27 12:15:09.501869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:31:19.520 [2024-11-27 12:15:09.502101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:19.520 [2024-11-27 12:15:09.502239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:19.520 [2024-11-27 12:15:09.502258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.520 [2024-11-27 12:15:09.502299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.520 [2024-11-27 12:15:09.502312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:19.520 [2024-11-27 12:15:09.502325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:19.520 [2024-11-27 12:15:09.502336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.520 [2024-11-27 12:15:09.502385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.520 [2024-11-27 12:15:09.502398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:19.520 [2024-11-27 12:15:09.502410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:19.520 [2024-11-27 12:15:09.502422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.520 [2024-11-27 12:15:09.502492] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.600 ms, result 0 00:31:19.520 true 00:31:19.520 12:15:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:31:19.520 12:15:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:19.520 12:15:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:19.779 12:15:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:31:19.779 12:15:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:31:19.779 12:15:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:20.038 [2024-11-27 12:15:09.933827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:20.038 [2024-11-27 12:15:09.933987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:20.038 [2024-11-27 12:15:09.934098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:20.038 [2024-11-27 12:15:09.934138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:20.038 [2024-11-27 12:15:09.934195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:20.038 [2024-11-27 12:15:09.934230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:20.038 [2024-11-27 12:15:09.934262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:20.038 [2024-11-27 12:15:09.934294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:20.038 [2024-11-27 12:15:09.934338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:20.038 [2024-11-27 12:15:09.934518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:20.038 [2024-11-27 12:15:09.934568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:20.038 [2024-11-27 12:15:09.934599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:20.038 [2024-11-27 12:15:09.934682] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.842 ms, result 0 00:31:20.038 true 00:31:20.038 12:15:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:20.298 { 00:31:20.298 "name": "ftl", 00:31:20.298 "properties": [ 00:31:20.298 { 00:31:20.298 "name": "superblock_version", 00:31:20.298 "value": 5, 00:31:20.298 "read-only": true 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "name": "base_device", 00:31:20.298 "bands": [ 00:31:20.298 { 00:31:20.298 "id": 0, 00:31:20.298 "state": "FREE", 00:31:20.298 "validity": 0.0 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "id": 1, 00:31:20.298 "state": "FREE", 00:31:20.298 "validity": 0.0 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "id": 2, 00:31:20.298 "state": "FREE", 00:31:20.298 "validity": 0.0 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "id": 3, 00:31:20.298 "state": "FREE", 00:31:20.298 "validity": 0.0 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "id": 4, 00:31:20.298 "state": "FREE", 00:31:20.298 "validity": 0.0 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "id": 5, 00:31:20.298 "state": "FREE", 00:31:20.298 "validity": 0.0 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "id": 6, 00:31:20.298 "state": "FREE", 00:31:20.298 "validity": 0.0 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "id": 7, 00:31:20.298 "state": "FREE", 00:31:20.298 "validity": 0.0 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "id": 8, 00:31:20.298 "state": "FREE", 00:31:20.298 "validity": 0.0 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "id": 9, 00:31:20.298 "state": "FREE", 00:31:20.298 "validity": 0.0 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "id": 10, 00:31:20.298 "state": "FREE", 00:31:20.298 "validity": 0.0 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "id": 11, 00:31:20.298 "state": "FREE", 00:31:20.298 "validity": 0.0 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "id": 12, 00:31:20.298 "state": "FREE", 00:31:20.298 "validity": 0.0 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "id": 13, 00:31:20.298 "state": "FREE", 00:31:20.298 "validity": 0.0 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "id": 14, 00:31:20.298 "state": "FREE", 00:31:20.298 "validity": 0.0 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "id": 15, 00:31:20.298 "state": "FREE", 00:31:20.298 "validity": 0.0 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "id": 16, 00:31:20.298 "state": "FREE", 00:31:20.298 "validity": 0.0 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "id": 17, 00:31:20.298 "state": "FREE", 00:31:20.298 "validity": 0.0 00:31:20.298 } 00:31:20.298 ], 00:31:20.298 "read-only": true 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "name": "cache_device", 00:31:20.298 "type": "bdev", 00:31:20.298 "chunks": [ 00:31:20.298 { 00:31:20.298 "id": 0, 00:31:20.298 "state": "INACTIVE", 00:31:20.298 "utilization": 0.0 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "id": 1, 00:31:20.298 "state": "CLOSED", 00:31:20.298 "utilization": 1.0 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "id": 2, 00:31:20.298 "state": "CLOSED", 00:31:20.298 "utilization": 1.0 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "id": 3, 00:31:20.298 "state": "OPEN", 00:31:20.298 "utilization": 0.001953125 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "id": 4, 00:31:20.298 "state": "OPEN", 00:31:20.298 "utilization": 0.0 00:31:20.298 } 00:31:20.298 ], 00:31:20.298 "read-only": true 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "name": "verbose_mode", 
00:31:20.298 "value": true, 00:31:20.298 "unit": "", 00:31:20.298 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:20.298 }, 00:31:20.298 { 00:31:20.298 "name": "prep_upgrade_on_shutdown", 00:31:20.298 "value": true, 00:31:20.298 "unit": "", 00:31:20.298 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:20.298 } 00:31:20.298 ] 00:31:20.298 } 00:31:20.298 12:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:31:20.298 12:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83458 ]] 00:31:20.298 12:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83458 00:31:20.298 12:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83458 ']' 00:31:20.298 12:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83458 00:31:20.298 12:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:20.298 12:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:20.298 12:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83458 00:31:20.298 killing process with pid 83458 00:31:20.298 12:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:20.298 12:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:20.298 12:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83458' 00:31:20.298 12:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83458 00:31:20.298 12:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83458 00:31:21.677 [2024-11-27 12:15:11.377120] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:21.677 [2024-11-27 12:15:11.397956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:21.677 [2024-11-27 12:15:11.398006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:21.677 [2024-11-27 12:15:11.398025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:21.677 [2024-11-27 12:15:11.398038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:21.677 [2024-11-27 12:15:11.398066] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:21.677 [2024-11-27 12:15:11.402458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:21.677 [2024-11-27 12:15:11.402495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:21.677 [2024-11-27 12:15:11.402509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.380 ms 00:31:21.677 [2024-11-27 12:15:11.402529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.243 [2024-11-27 12:15:18.241011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.243 [2024-11-27 12:15:18.241090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:28.243 [2024-11-27 12:15:18.241118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6849.544 ms 00:31:28.244 [2024-11-27 12:15:18.241132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.244 [2024-11-27 12:15:18.242279] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:31:28.244 [2024-11-27 12:15:18.242322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:28.244 [2024-11-27 12:15:18.242337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.116 ms 00:31:28.244 [2024-11-27 12:15:18.242349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.244 [2024-11-27 12:15:18.243214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.244 [2024-11-27 12:15:18.243244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:28.244 [2024-11-27 12:15:18.243260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.819 ms 00:31:28.244 [2024-11-27 12:15:18.243282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.244 [2024-11-27 12:15:18.257998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.244 [2024-11-27 12:15:18.258040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:28.244 [2024-11-27 12:15:18.258055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.697 ms 00:31:28.244 [2024-11-27 12:15:18.258067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.244 [2024-11-27 12:15:18.266952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.244 [2024-11-27 12:15:18.266993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:28.244 [2024-11-27 12:15:18.267009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.857 ms 00:31:28.244 [2024-11-27 12:15:18.267022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.244 [2024-11-27 12:15:18.267109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.244 [2024-11-27 12:15:18.267132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:28.244 [2024-11-27 12:15:18.267145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:31:28.244 [2024-11-27 12:15:18.267157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.244 [2024-11-27 12:15:18.280966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.244 [2024-11-27 12:15:18.281184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:28.244 [2024-11-27 12:15:18.281210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.810 ms 00:31:28.244 [2024-11-27 12:15:18.281222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.503 [2024-11-27 12:15:18.295572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.503 [2024-11-27 12:15:18.295749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:28.503 [2024-11-27 12:15:18.295773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.298 ms 00:31:28.503 [2024-11-27 12:15:18.295785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.503 [2024-11-27 12:15:18.309941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.503 [2024-11-27 12:15:18.309979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:28.503 [2024-11-27 12:15:18.309993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.136 ms 00:31:28.503 [2024-11-27 12:15:18.310004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.503 [2024-11-27 12:15:18.324048] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.503 [2024-11-27 12:15:18.324086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:28.503 [2024-11-27 12:15:18.324100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.977 ms 00:31:28.503 [2024-11-27 12:15:18.324111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.503 [2024-11-27 12:15:18.324149] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:28.503 [2024-11-27 12:15:18.324183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:28.503 [2024-11-27 12:15:18.324197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:28.503 [2024-11-27 12:15:18.324209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:28.503 [2024-11-27 12:15:18.324223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:28.503 [2024-11-27 12:15:18.324235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:28.503 [2024-11-27 12:15:18.324247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:28.503 [2024-11-27 12:15:18.324260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:28.503 [2024-11-27 12:15:18.324272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:28.503 [2024-11-27 12:15:18.324285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:28.503 [2024-11-27 12:15:18.324296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:28.503 [2024-11-27 12:15:18.324308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:28.503 [2024-11-27 12:15:18.324319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:28.503 [2024-11-27 12:15:18.324331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:28.503 [2024-11-27 12:15:18.324343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:28.503 [2024-11-27 12:15:18.324354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:28.503 [2024-11-27 12:15:18.324381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:28.503 [2024-11-27 12:15:18.324394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:28.503 [2024-11-27 12:15:18.324406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:28.504 [2024-11-27 12:15:18.324421] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:28.504 [2024-11-27 12:15:18.324435] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f988fc2c-fca4-4808-9592-ac6190986ab8 00:31:28.504 [2024-11-27 12:15:18.324447] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:28.504 [2024-11-27 12:15:18.324459] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:31:28.504 [2024-11-27 12:15:18.324470] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:31:28.504 [2024-11-27 12:15:18.324481] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:31:28.504 [2024-11-27 12:15:18.324500] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:28.504 [2024-11-27 12:15:18.324514] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:28.504 [2024-11-27 12:15:18.324531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:28.504 [2024-11-27 12:15:18.324542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:28.504 [2024-11-27 12:15:18.324553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:28.504 [2024-11-27 12:15:18.324565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.504 [2024-11-27 12:15:18.324577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:28.504 [2024-11-27 12:15:18.324589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.417 ms 00:31:28.504 [2024-11-27 12:15:18.324602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.504 [2024-11-27 12:15:18.350394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.504 [2024-11-27 12:15:18.350598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:28.504 [2024-11-27 12:15:18.350634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.797 ms 00:31:28.504 [2024-11-27 12:15:18.350648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.504 [2024-11-27 12:15:18.351266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.504 [2024-11-27 12:15:18.351281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:28.504 [2024-11-27 12:15:18.351295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.589 ms 00:31:28.504 [2024-11-27 12:15:18.351309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.504 [2024-11-27 12:15:18.415940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:28.504 [2024-11-27 12:15:18.415990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:28.504 [2024-11-27 12:15:18.416005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:28.504 [2024-11-27 12:15:18.416017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.504 [2024-11-27 12:15:18.416058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:28.504 [2024-11-27 12:15:18.416071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:28.504 [2024-11-27 12:15:18.416084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:28.504 [2024-11-27 12:15:18.416096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.504 [2024-11-27 12:15:18.416210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:28.504 [2024-11-27 12:15:18.416226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:28.504 [2024-11-27 12:15:18.416245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:28.504 [2024-11-27 12:15:18.416257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.504 [2024-11-27 12:15:18.416278] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:28.504 [2024-11-27 12:15:18.416291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:28.504 [2024-11-27 12:15:18.416302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:28.504 [2024-11-27 12:15:18.416314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.504 [2024-11-27 12:15:18.540686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:28.504 [2024-11-27 12:15:18.540914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:28.504 [2024-11-27 12:15:18.540951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:28.504 [2024-11-27 12:15:18.540964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.764 [2024-11-27 12:15:18.641031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:28.764 [2024-11-27 12:15:18.641087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:28.764 [2024-11-27 12:15:18.641104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:28.764 [2024-11-27 12:15:18.641118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.764 [2024-11-27 12:15:18.641268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:28.764 [2024-11-27 12:15:18.641283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:28.764 [2024-11-27 12:15:18.641296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:28.764 [2024-11-27 12:15:18.641318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.764 [2024-11-27 12:15:18.641401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:28.764 [2024-11-27 12:15:18.641416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:28.764 [2024-11-27 12:15:18.641429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:28.764 [2024-11-27 12:15:18.641441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.764 [2024-11-27 12:15:18.641570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:28.764 [2024-11-27 12:15:18.641586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:28.764 [2024-11-27 12:15:18.641600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:28.764 [2024-11-27 12:15:18.641612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.764 [2024-11-27 12:15:18.641667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:28.764 [2024-11-27 12:15:18.641693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:28.764 [2024-11-27 12:15:18.641705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:28.764 [2024-11-27 12:15:18.641718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.764 [2024-11-27 12:15:18.641770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:28.764 [2024-11-27 12:15:18.641784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:28.764 [2024-11-27 12:15:18.641798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:28.764 [2024-11-27 12:15:18.641810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.764 
[2024-11-27 12:15:18.641872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:28.764 [2024-11-27 12:15:18.641887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:28.764 [2024-11-27 12:15:18.641899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:28.764 [2024-11-27 12:15:18.641911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.764 [2024-11-27 12:15:18.642073] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7255.838 ms, result 0 00:31:32.974 12:15:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:32.974 12:15:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:31:32.974 12:15:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:32.974 12:15:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:32.974 12:15:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:32.974 12:15:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84013 00:31:32.974 12:15:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:32.974 12:15:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:32.974 12:15:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84013 00:31:32.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:32.974 12:15:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84013 ']' 00:31:32.974 12:15:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.974 12:15:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:32.974 12:15:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:32.974 12:15:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:32.974 12:15:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:32.974 [2024-11-27 12:15:22.486344] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:31:32.974 [2024-11-27 12:15:22.486531] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84013 ] 00:31:32.974 [2024-11-27 12:15:22.675818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.974 [2024-11-27 12:15:22.805225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.911 [2024-11-27 12:15:23.881810] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:33.911 [2024-11-27 12:15:23.881894] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:34.171 [2024-11-27 12:15:24.030313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.171 [2024-11-27 12:15:24.030377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:34.171 [2024-11-27 12:15:24.030397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:34.171 [2024-11-27 12:15:24.030409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.171 [2024-11-27 12:15:24.030484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.171 [2024-11-27 12:15:24.030499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:34.171 [2024-11-27 12:15:24.030513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:31:34.172 [2024-11-27 12:15:24.030525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.172 [2024-11-27 12:15:24.030553] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:34.172 [2024-11-27 12:15:24.031584] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:34.172 [2024-11-27 12:15:24.031835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.172 [2024-11-27 12:15:24.031855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:34.172 [2024-11-27 12:15:24.031868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.288 ms 00:31:34.172 [2024-11-27 12:15:24.031880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.172 [2024-11-27 12:15:24.034461] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:34.172 [2024-11-27 12:15:24.053374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.172 [2024-11-27 12:15:24.053586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:34.172 [2024-11-27 12:15:24.053611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.944 ms 00:31:34.172 [2024-11-27 12:15:24.053624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.172 [2024-11-27 12:15:24.053729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.172 [2024-11-27 12:15:24.053747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:34.172 [2024-11-27 12:15:24.053761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:31:34.172 [2024-11-27 12:15:24.053772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.172 [2024-11-27 12:15:24.066543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.172 [2024-11-27 
12:15:24.066576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:34.172 [2024-11-27 12:15:24.066591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.696 ms 00:31:34.172 [2024-11-27 12:15:24.066603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.172 [2024-11-27 12:15:24.066681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.172 [2024-11-27 12:15:24.066697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:34.172 [2024-11-27 12:15:24.066710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:31:34.172 [2024-11-27 12:15:24.066722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.172 [2024-11-27 12:15:24.066789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.172 [2024-11-27 12:15:24.066809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:34.172 [2024-11-27 12:15:24.066821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:31:34.172 [2024-11-27 12:15:24.066833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.172 [2024-11-27 12:15:24.066865] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:34.172 [2024-11-27 12:15:24.072310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.172 [2024-11-27 12:15:24.072348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:34.172 [2024-11-27 12:15:24.072386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.461 ms 00:31:34.172 [2024-11-27 12:15:24.072397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.172 [2024-11-27 12:15:24.072431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.172 [2024-11-27 12:15:24.072443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:34.172 [2024-11-27 12:15:24.072456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:34.172 [2024-11-27 12:15:24.072468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.172 [2024-11-27 12:15:24.072533] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:34.172 [2024-11-27 12:15:24.072569] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:34.172 [2024-11-27 12:15:24.072608] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:34.172 [2024-11-27 12:15:24.072628] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:34.172 [2024-11-27 12:15:24.072723] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:34.172 [2024-11-27 12:15:24.072740] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:34.172 [2024-11-27 12:15:24.072755] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:34.172 [2024-11-27 12:15:24.072769] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:34.172 [2024-11-27 12:15:24.072788] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:31:34.172 [2024-11-27 12:15:24.072801] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:34.172 [2024-11-27 12:15:24.072812] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:34.172 [2024-11-27 12:15:24.072823] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:34.172 [2024-11-27 12:15:24.072835] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:34.172 [2024-11-27 12:15:24.072847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.172 [2024-11-27 12:15:24.072858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:34.172 [2024-11-27 12:15:24.072871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.319 ms 00:31:34.172 [2024-11-27 12:15:24.072881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.172 [2024-11-27 12:15:24.072959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.172 [2024-11-27 12:15:24.072972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:34.172 [2024-11-27 12:15:24.072989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:31:34.172 [2024-11-27 12:15:24.073000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.172 [2024-11-27 12:15:24.073094] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:34.172 [2024-11-27 12:15:24.073117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:34.172 [2024-11-27 12:15:24.073129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:34.172 [2024-11-27 12:15:24.073141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:34.172 [2024-11-27 12:15:24.073152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:34.172 [2024-11-27 12:15:24.073163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:34.172 [2024-11-27 12:15:24.073174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:34.172 [2024-11-27 12:15:24.073186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:34.172 [2024-11-27 12:15:24.073197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:34.172 [2024-11-27 12:15:24.073208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:34.172 [2024-11-27 12:15:24.073218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:34.172 [2024-11-27 12:15:24.073230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:34.172 [2024-11-27 12:15:24.073241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:34.172 [2024-11-27 12:15:24.073251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:34.172 [2024-11-27 12:15:24.073262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:34.172 [2024-11-27 12:15:24.073273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:34.172 [2024-11-27 12:15:24.073283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:34.172 [2024-11-27 12:15:24.073293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:34.172 [2024-11-27 12:15:24.073304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:34.172 [2024-11-27 12:15:24.073315] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:34.172 [2024-11-27 12:15:24.073325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:34.172 [2024-11-27 12:15:24.073335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:34.172 [2024-11-27 12:15:24.073346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:34.172 [2024-11-27 12:15:24.073383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:34.172 [2024-11-27 12:15:24.073394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:34.172 [2024-11-27 12:15:24.073405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:34.172 [2024-11-27 12:15:24.073416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:34.172 [2024-11-27 12:15:24.073428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:34.172 [2024-11-27 12:15:24.073439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:34.172 [2024-11-27 12:15:24.073449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:34.172 [2024-11-27 12:15:24.073459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:34.172 [2024-11-27 12:15:24.073471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:34.172 [2024-11-27 12:15:24.073481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:34.172 [2024-11-27 12:15:24.073492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:34.172 [2024-11-27 12:15:24.073502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:34.172 [2024-11-27 12:15:24.073513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:34.172 [2024-11-27 12:15:24.073523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:34.172 [2024-11-27 12:15:24.073536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:34.172 [2024-11-27 12:15:24.073547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:34.172 [2024-11-27 12:15:24.073557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:34.172 [2024-11-27 12:15:24.073567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:34.172 [2024-11-27 12:15:24.073577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:34.172 [2024-11-27 12:15:24.073588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:34.172 [2024-11-27 12:15:24.073600] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:34.172 [2024-11-27 12:15:24.073613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:34.172 [2024-11-27 12:15:24.073625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:34.172 [2024-11-27 12:15:24.073640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:34.173 [2024-11-27 12:15:24.073652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:34.173 [2024-11-27 12:15:24.073663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:34.173 [2024-11-27 12:15:24.073684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:34.173 [2024-11-27 12:15:24.073695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:34.173 [2024-11-27 12:15:24.073706] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:34.173 [2024-11-27 12:15:24.073717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:34.173 [2024-11-27 12:15:24.073730] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:34.173 [2024-11-27 12:15:24.073743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:34.173 [2024-11-27 12:15:24.073757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:34.173 [2024-11-27 12:15:24.073768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:34.173 [2024-11-27 12:15:24.073780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:34.173 [2024-11-27 12:15:24.073792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:34.173 [2024-11-27 12:15:24.073804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:34.173 [2024-11-27 12:15:24.073816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:34.173 [2024-11-27 12:15:24.073828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:34.173 [2024-11-27 12:15:24.073839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:34.173 [2024-11-27 12:15:24.073851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:34.173 [2024-11-27 12:15:24.073862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:34.173 [2024-11-27 12:15:24.073874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:34.173 [2024-11-27 12:15:24.073885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:34.173 [2024-11-27 12:15:24.073898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:34.173 [2024-11-27 12:15:24.073910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:34.173 [2024-11-27 12:15:24.073921] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:34.173 [2024-11-27 12:15:24.073933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:34.173 [2024-11-27 12:15:24.073946] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:34.173 [2024-11-27 12:15:24.073957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:34.173 [2024-11-27 12:15:24.073968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:34.173 [2024-11-27 12:15:24.073979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:34.173 [2024-11-27 12:15:24.073994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.173 [2024-11-27 12:15:24.074006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:34.173 [2024-11-27 12:15:24.074019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.953 ms 00:31:34.173 [2024-11-27 12:15:24.074030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.173 [2024-11-27 12:15:24.074085] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:31:34.173 [2024-11-27 12:15:24.074105] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:37.466 [2024-11-27 12:15:27.506015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.466 [2024-11-27 12:15:27.506214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:37.466 [2024-11-27 12:15:27.506238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3437.503 ms 00:31:37.466 [2024-11-27 12:15:27.506253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.725 [2024-11-27 12:15:27.555426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.726 [2024-11-27 12:15:27.555469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:37.726 [2024-11-27 12:15:27.555486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.882 ms 00:31:37.726 [2024-11-27 12:15:27.555499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.726 [2024-11-27 12:15:27.555618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.726 [2024-11-27 12:15:27.555634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:37.726 [2024-11-27 12:15:27.555648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:37.726 [2024-11-27 12:15:27.555660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.726 [2024-11-27 12:15:27.610691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.726 [2024-11-27 12:15:27.610735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:37.726 [2024-11-27 12:15:27.610756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.055 ms 00:31:37.726 [2024-11-27 12:15:27.610768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.726 [2024-11-27 12:15:27.610818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.726 [2024-11-27 12:15:27.610831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:37.726 [2024-11-27 12:15:27.610844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:37.726 [2024-11-27 12:15:27.610857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.726 [2024-11-27 12:15:27.611714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.726 [2024-11-27 12:15:27.611734] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:37.726 [2024-11-27 12:15:27.611748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.788 ms 00:31:37.726 [2024-11-27 12:15:27.611766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.726 [2024-11-27 12:15:27.611821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.726 [2024-11-27 12:15:27.611834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:37.726 [2024-11-27 12:15:27.611846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:31:37.726 [2024-11-27 12:15:27.611858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.726 [2024-11-27 12:15:27.638988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.726 [2024-11-27 12:15:27.639241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:37.726 [2024-11-27 12:15:27.639265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.147 ms 00:31:37.726 [2024-11-27 12:15:27.639279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.726 [2024-11-27 12:15:27.687921] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:37.726 [2024-11-27 12:15:27.687970] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:37.726 [2024-11-27 12:15:27.687992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.726 [2024-11-27 12:15:27.688005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:31:37.726 [2024-11-27 12:15:27.688020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.622 ms 00:31:37.726 [2024-11-27 12:15:27.688032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.726 [2024-11-27 12:15:27.707728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.726 [2024-11-27 12:15:27.707773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:31:37.726 [2024-11-27 12:15:27.707790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.670 ms 00:31:37.726 [2024-11-27 12:15:27.707802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.726 [2024-11-27 12:15:27.724717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.726 [2024-11-27 12:15:27.724770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:31:37.726 [2024-11-27 12:15:27.724786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.883 ms 00:31:37.726 [2024-11-27 12:15:27.724798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.726 [2024-11-27 12:15:27.742060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.726 [2024-11-27 12:15:27.742101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:31:37.726 [2024-11-27 12:15:27.742115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.240 ms 00:31:37.726 [2024-11-27 12:15:27.742127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.726 [2024-11-27 12:15:27.742922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.726 [2024-11-27 12:15:27.742950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:37.726 [2024-11-27 
12:15:27.742965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.676 ms 00:31:37.726 [2024-11-27 12:15:27.742977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.986 [2024-11-27 12:15:27.838497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.986 [2024-11-27 12:15:27.838763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:37.986 [2024-11-27 12:15:27.838791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 95.643 ms 00:31:37.986 [2024-11-27 12:15:27.838806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.986 [2024-11-27 12:15:27.848745] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:37.986 [2024-11-27 12:15:27.849548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.986 [2024-11-27 12:15:27.849629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:37.986 [2024-11-27 12:15:27.849648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.708 ms 00:31:37.986 [2024-11-27 12:15:27.849662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.986 [2024-11-27 12:15:27.849754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.986 [2024-11-27 12:15:27.849775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:31:37.986 [2024-11-27 12:15:27.849789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:37.986 [2024-11-27 12:15:27.849801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.986 [2024-11-27 12:15:27.849890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.986 [2024-11-27 12:15:27.849905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:37.986 [2024-11-27 12:15:27.849918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:31:37.986 [2024-11-27 12:15:27.849929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.986 [2024-11-27 12:15:27.849963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.986 [2024-11-27 12:15:27.849977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:37.986 [2024-11-27 12:15:27.849995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:37.986 [2024-11-27 12:15:27.850007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.986 [2024-11-27 12:15:27.850081] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:37.986 [2024-11-27 12:15:27.850100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.986 [2024-11-27 12:15:27.850112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:37.986 [2024-11-27 12:15:27.850125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:31:37.986 [2024-11-27 12:15:27.850139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.986 [2024-11-27 12:15:27.884134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.986 [2024-11-27 12:15:27.884187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:37.986 [2024-11-27 12:15:27.884203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.020 ms 00:31:37.986 [2024-11-27 12:15:27.884215] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.986 [2024-11-27 12:15:27.884310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.986 [2024-11-27 12:15:27.884325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:37.986 [2024-11-27 12:15:27.884338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:31:37.986 [2024-11-27 12:15:27.884351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.986 [2024-11-27 12:15:27.886040] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3861.413 ms, result 0 00:31:37.986 [2024-11-27 12:15:27.900529] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:37.986 [2024-11-27 12:15:27.916519] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:37.986 [2024-11-27 12:15:27.925349] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:38.555 12:15:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:38.555 12:15:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:38.555 12:15:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:38.555 12:15:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:38.555 12:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:38.815 [2024-11-27 12:15:28.636530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.815 [2024-11-27 12:15:28.636575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:38.815 [2024-11-27 12:15:28.636597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:38.815 [2024-11-27 12:15:28.636609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.815 [2024-11-27 12:15:28.636636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.815 [2024-11-27 12:15:28.636649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:38.815 [2024-11-27 12:15:28.636661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:38.815 [2024-11-27 12:15:28.636673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.815 [2024-11-27 12:15:28.636696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.815 [2024-11-27 12:15:28.636709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:38.815 [2024-11-27 12:15:28.636721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:38.815 [2024-11-27 12:15:28.636733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.815 [2024-11-27 12:15:28.636795] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.256 ms, result 0 00:31:38.815 true 00:31:38.815 12:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:38.815 { 00:31:38.815 "name": "ftl", 00:31:38.815 "properties": [ 00:31:38.815 { 00:31:38.815 "name": "superblock_version", 00:31:38.815 "value": 5, 00:31:38.815 "read-only": true 00:31:38.815 }, 
00:31:38.815 { 00:31:38.815 "name": "base_device", 00:31:38.815 "bands": [ 00:31:38.815 { 00:31:38.815 "id": 0, 00:31:38.815 "state": "CLOSED", 00:31:38.815 "validity": 1.0 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "id": 1, 00:31:38.815 "state": "CLOSED", 00:31:38.815 "validity": 1.0 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "id": 2, 00:31:38.815 "state": "CLOSED", 00:31:38.815 "validity": 0.007843137254901933 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "id": 3, 00:31:38.815 "state": "FREE", 00:31:38.815 "validity": 0.0 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "id": 4, 00:31:38.815 "state": "FREE", 00:31:38.815 "validity": 0.0 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "id": 5, 00:31:38.815 "state": "FREE", 00:31:38.815 "validity": 0.0 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "id": 6, 00:31:38.815 "state": "FREE", 00:31:38.815 "validity": 0.0 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "id": 7, 00:31:38.815 "state": "FREE", 00:31:38.815 "validity": 0.0 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "id": 8, 00:31:38.815 "state": "FREE", 00:31:38.815 "validity": 0.0 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "id": 9, 00:31:38.815 "state": "FREE", 00:31:38.815 "validity": 0.0 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "id": 10, 00:31:38.815 "state": "FREE", 00:31:38.815 "validity": 0.0 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "id": 11, 00:31:38.815 "state": "FREE", 00:31:38.815 "validity": 0.0 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "id": 12, 00:31:38.815 "state": "FREE", 00:31:38.815 "validity": 0.0 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "id": 13, 00:31:38.815 "state": "FREE", 00:31:38.815 "validity": 0.0 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "id": 14, 00:31:38.815 "state": "FREE", 00:31:38.815 "validity": 0.0 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "id": 15, 00:31:38.815 "state": "FREE", 00:31:38.815 "validity": 0.0 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "id": 16, 00:31:38.815 "state": "FREE", 00:31:38.815 "validity": 0.0 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "id": 17, 00:31:38.815 "state": "FREE", 00:31:38.815 "validity": 0.0 00:31:38.815 } 00:31:38.815 ], 00:31:38.815 "read-only": true 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "name": "cache_device", 00:31:38.815 "type": "bdev", 00:31:38.815 "chunks": [ 00:31:38.815 { 00:31:38.815 "id": 0, 00:31:38.815 "state": "INACTIVE", 00:31:38.815 "utilization": 0.0 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "id": 1, 00:31:38.815 "state": "OPEN", 00:31:38.815 "utilization": 0.0 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "id": 2, 00:31:38.815 "state": "OPEN", 00:31:38.815 "utilization": 0.0 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "id": 3, 00:31:38.815 "state": "FREE", 00:31:38.815 "utilization": 0.0 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "id": 4, 00:31:38.815 "state": "FREE", 00:31:38.815 "utilization": 0.0 00:31:38.815 } 00:31:38.815 ], 00:31:38.815 "read-only": true 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "name": "verbose_mode", 00:31:38.815 "value": true, 00:31:38.815 "unit": "", 00:31:38.815 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:38.815 }, 00:31:38.815 { 00:31:38.815 "name": "prep_upgrade_on_shutdown", 00:31:38.815 "value": false, 00:31:38.815 "unit": "", 00:31:38.815 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:38.815 } 00:31:38.815 ] 00:31:38.815 } 00:31:39.076 12:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:31:39.076 12:15:28 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:39.076 12:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:39.076 12:15:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:31:39.076 12:15:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:31:39.076 12:15:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:31:39.076 12:15:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:39.076 12:15:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:31:39.334 Validate MD5 checksum, iteration 1 00:31:39.334 12:15:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:31:39.334 12:15:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:31:39.334 12:15:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:31:39.334 12:15:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:39.334 12:15:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:39.334 12:15:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:39.334 12:15:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:39.334 12:15:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:39.334 12:15:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:39.334 12:15:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:39.334 12:15:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:39.334 12:15:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:39.334 12:15:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:39.334 [2024-11-27 12:15:29.378877] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
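Before the checksum loop starts, upgrade_shutdown.sh@82-90 gates on the bdev_ftl_get_properties output shown earlier: zero NV cache chunks may have non-zero utilization and zero bands may be in state OPENED, i.e. the instance must look freshly started. Pulled together from the xtrace (the rpc.py call, here inlined in place of the ftl_get_properties helper, and both jq filters are verbatim; what the script does on a non-zero count is not visible in the log, so the failure branch below is an assumption):

    used=$(scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -ne 0 ]] && { echo "NV cache already has used chunks"; exit 1; }

    opened=$(scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
    [[ $opened -ne 0 ]] && { echo "bands already opened"; exit 1; }

Both counts come back 0 in the trace above (used=0, opened=0), so the test falls through to the first MD5 validation pass.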
00:31:39.334 [2024-11-27 12:15:29.379016] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84097 ] 00:31:39.621 [2024-11-27 12:15:29.558081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.621 [2024-11-27 12:15:29.667664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:41.534  [2024-11-27T12:15:32.155Z] Copying: 574/1024 [MB] (574 MBps) [2024-11-27T12:15:34.066Z] Copying: 1024/1024 [MB] (average 575 MBps) 00:31:44.013 00:31:44.013 12:15:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:44.013 12:15:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:45.392 12:15:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:45.392 Validate MD5 checksum, iteration 2 00:31:45.392 12:15:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=aa11631852852a687d27294306e1d068 00:31:45.392 12:15:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ aa11631852852a687d27294306e1d068 != \a\a\1\1\6\3\1\8\5\2\8\5\2\a\6\8\7\d\2\7\2\9\4\3\0\6\e\1\d\0\6\8 ]] 00:31:45.392 12:15:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:45.392 12:15:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:45.392 12:15:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:45.392 12:15:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:45.392 12:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:45.392 12:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:45.392 12:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:45.392 12:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:45.392 12:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:45.392 [2024-11-27 12:15:35.356933] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
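Each iteration of test_validate_checksum reads 1 GiB off the exported ftln1 device through spdk_dd (tcp_dd is a thin wrapper that points spdk_dd at the initiator config in ini.json and the target's RPC socket) and compares its md5 against a reference computed earlier. One iteration reassembled from the xtrace above; the name of the stored reference sum is an assumption. The backslash-escaped \a\a\1\1... on the right of != in the trace is just bash xtrace quoting: the right-hand side of != inside [[ ]] is a glob pattern, so every character gets escaped to force a literal match:

    tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    sum=$(md5sum "$testfile" | cut -f1 -d' ')
    [[ $sum != "$ref_sum" ]] && return 1   # quoting the RHS gives the same literal comparison
    skip=$((skip + 1024))

Iteration 1 above read blocks 0-1023 and matched (sum=aa11631852852a687d27294306e1d068); the skip=1024 assignment positions iteration 2 over the next gigabyte.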
00:31:45.392 [2024-11-27 12:15:35.357245] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84168 ] 00:31:45.652 [2024-11-27 12:15:35.537730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.652 [2024-11-27 12:15:35.642643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.558  [2024-11-27T12:15:38.180Z] Copying: 588/1024 [MB] (588 MBps) [2024-11-27T12:15:40.717Z] Copying: 1024/1024 [MB] (average 588 MBps) 00:31:50.664 00:31:50.664 12:15:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:50.664 12:15:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2b425c2862bd36bf7f43135735ed0dd0 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2b425c2862bd36bf7f43135735ed0dd0 != \2\b\4\2\5\c\2\8\6\2\b\d\3\6\b\f\7\f\4\3\1\3\5\7\3\5\e\d\0\d\d\0 ]] 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84013 ]] 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84013 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84241 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84241 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84241 ']' 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:52.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:52.572 12:15:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:52.572 [2024-11-27 12:15:42.259552] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:31:52.572 [2024-11-27 12:15:42.259668] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84241 ] 00:31:52.572 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84013 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:31:52.572 [2024-11-27 12:15:42.439006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.572 [2024-11-27 12:15:42.567661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.957 [2024-11-27 12:15:43.639423] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:53.957 [2024-11-27 12:15:43.639510] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:53.957 [2024-11-27 12:15:43.787532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.957 [2024-11-27 12:15:43.787583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:53.957 [2024-11-27 12:15:43.787603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:53.957 [2024-11-27 12:15:43.787615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.957 [2024-11-27 12:15:43.787689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.957 [2024-11-27 12:15:43.787705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:53.957 [2024-11-27 12:15:43.787718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:31:53.957 [2024-11-27 12:15:43.787729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.957 [2024-11-27 12:15:43.787756] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:53.957 [2024-11-27 12:15:43.788710] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:53.957 [2024-11-27 12:15:43.788738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.957 [2024-11-27 12:15:43.788761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:53.957 [2024-11-27 12:15:43.788774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.988 ms 00:31:53.957 [2024-11-27 12:15:43.788785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.957 [2024-11-27 12:15:43.789383] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:53.957 [2024-11-27 12:15:43.814254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.957 [2024-11-27 12:15:43.814300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:53.957 [2024-11-27 12:15:43.814319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.911 ms 00:31:53.957 [2024-11-27 12:15:43.814333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.957 [2024-11-27 12:15:43.827840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:31:53.957 [2024-11-27 12:15:43.827881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:53.957 [2024-11-27 12:15:43.827895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:31:53.957 [2024-11-27 12:15:43.827907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.957 [2024-11-27 12:15:43.828435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.957 [2024-11-27 12:15:43.828460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:53.957 [2024-11-27 12:15:43.828473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.441 ms 00:31:53.957 [2024-11-27 12:15:43.828485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.957 [2024-11-27 12:15:43.828556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.957 [2024-11-27 12:15:43.828572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:53.957 [2024-11-27 12:15:43.828584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:31:53.957 [2024-11-27 12:15:43.828595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.957 [2024-11-27 12:15:43.828627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.957 [2024-11-27 12:15:43.828639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:53.957 [2024-11-27 12:15:43.828651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:53.957 [2024-11-27 12:15:43.828663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.957 [2024-11-27 12:15:43.828690] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:53.957 [2024-11-27 12:15:43.832558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.957 [2024-11-27 12:15:43.832590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:53.957 [2024-11-27 12:15:43.832604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.879 ms 00:31:53.957 [2024-11-27 12:15:43.832620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.957 [2024-11-27 12:15:43.832650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.957 [2024-11-27 12:15:43.832664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:53.957 [2024-11-27 12:15:43.832676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:53.957 [2024-11-27 12:15:43.832687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.957 [2024-11-27 12:15:43.832728] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:53.957 [2024-11-27 12:15:43.832756] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:53.957 [2024-11-27 12:15:43.832795] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:53.957 [2024-11-27 12:15:43.832818] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:53.957 [2024-11-27 12:15:43.832909] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:53.957 [2024-11-27 12:15:43.832925] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:53.957 [2024-11-27 12:15:43.832941] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:53.957 [2024-11-27 12:15:43.832956] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:53.957 [2024-11-27 12:15:43.832969] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:53.957 [2024-11-27 12:15:43.832983] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:53.957 [2024-11-27 12:15:43.832994] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:53.957 [2024-11-27 12:15:43.833006] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:53.957 [2024-11-27 12:15:43.833018] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:53.957 [2024-11-27 12:15:43.833035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.957 [2024-11-27 12:15:43.833046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:53.957 [2024-11-27 12:15:43.833057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.309 ms 00:31:53.957 [2024-11-27 12:15:43.833069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.958 [2024-11-27 12:15:43.833140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.958 [2024-11-27 12:15:43.833153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:53.958 [2024-11-27 12:15:43.833164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:31:53.958 [2024-11-27 12:15:43.833176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.958 [2024-11-27 12:15:43.833264] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:53.958 [2024-11-27 12:15:43.833284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:53.958 [2024-11-27 12:15:43.833297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:53.958 [2024-11-27 12:15:43.833310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:53.958 [2024-11-27 12:15:43.833322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:53.958 [2024-11-27 12:15:43.833333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:53.958 [2024-11-27 12:15:43.833344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:53.958 [2024-11-27 12:15:43.833381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:53.958 [2024-11-27 12:15:43.833395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:53.958 [2024-11-27 12:15:43.833408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:53.958 [2024-11-27 12:15:43.833418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:53.958 [2024-11-27 12:15:43.833429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:53.958 [2024-11-27 12:15:43.833440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:53.958 [2024-11-27 12:15:43.833451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:53.958 [2024-11-27 12:15:43.833462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:31:53.958 [2024-11-27 12:15:43.833473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:53.958 [2024-11-27 12:15:43.833484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:53.958 [2024-11-27 12:15:43.833494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:53.958 [2024-11-27 12:15:43.833504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:53.958 [2024-11-27 12:15:43.833515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:53.958 [2024-11-27 12:15:43.833526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:53.958 [2024-11-27 12:15:43.833550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:53.958 [2024-11-27 12:15:43.833561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:53.958 [2024-11-27 12:15:43.833571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:53.958 [2024-11-27 12:15:43.833582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:53.958 [2024-11-27 12:15:43.833593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:53.958 [2024-11-27 12:15:43.833603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:53.958 [2024-11-27 12:15:43.833613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:53.958 [2024-11-27 12:15:43.833624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:53.958 [2024-11-27 12:15:43.833634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:53.958 [2024-11-27 12:15:43.833644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:53.958 [2024-11-27 12:15:43.833655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:53.958 [2024-11-27 12:15:43.833665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:53.958 [2024-11-27 12:15:43.833685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:53.958 [2024-11-27 12:15:43.833696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:53.958 [2024-11-27 12:15:43.833707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:53.958 [2024-11-27 12:15:43.833717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:53.958 [2024-11-27 12:15:43.833727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:53.958 [2024-11-27 12:15:43.833738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:53.958 [2024-11-27 12:15:43.833748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:53.958 [2024-11-27 12:15:43.833759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:53.958 [2024-11-27 12:15:43.833770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:53.958 [2024-11-27 12:15:43.833781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:53.958 [2024-11-27 12:15:43.833792] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:53.958 [2024-11-27 12:15:43.833805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:53.958 [2024-11-27 12:15:43.833816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:53.958 [2024-11-27 12:15:43.833828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:31:53.958 [2024-11-27 12:15:43.833839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:53.958 [2024-11-27 12:15:43.833850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:53.958 [2024-11-27 12:15:43.833860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:53.958 [2024-11-27 12:15:43.833871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:53.958 [2024-11-27 12:15:43.833882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:53.958 [2024-11-27 12:15:43.833892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:53.958 [2024-11-27 12:15:43.833905] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:53.958 [2024-11-27 12:15:43.833919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:53.958 [2024-11-27 12:15:43.833931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:53.958 [2024-11-27 12:15:43.833943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:53.958 [2024-11-27 12:15:43.833955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:53.958 [2024-11-27 12:15:43.833967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:53.958 [2024-11-27 12:15:43.833979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:53.958 [2024-11-27 12:15:43.833991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:53.958 [2024-11-27 12:15:43.834003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:53.958 [2024-11-27 12:15:43.834015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:53.958 [2024-11-27 12:15:43.834026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:53.958 [2024-11-27 12:15:43.834037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:53.958 [2024-11-27 12:15:43.834049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:53.958 [2024-11-27 12:15:43.834060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:53.958 [2024-11-27 12:15:43.834072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:53.958 [2024-11-27 12:15:43.834083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:53.958 [2024-11-27 12:15:43.834095] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:31:53.958 [2024-11-27 12:15:43.834107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:53.958 [2024-11-27 12:15:43.834126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:53.958 [2024-11-27 12:15:43.834139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:53.958 [2024-11-27 12:15:43.834153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:53.958 [2024-11-27 12:15:43.834166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:53.958 [2024-11-27 12:15:43.834179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.958 [2024-11-27 12:15:43.834191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:53.958 [2024-11-27 12:15:43.834204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.967 ms 00:31:53.958 [2024-11-27 12:15:43.834215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.958 [2024-11-27 12:15:43.873898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.958 [2024-11-27 12:15:43.873933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:53.958 [2024-11-27 12:15:43.873948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.693 ms 00:31:53.958 [2024-11-27 12:15:43.873961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.958 [2024-11-27 12:15:43.874007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.958 [2024-11-27 12:15:43.874020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:53.958 [2024-11-27 12:15:43.874034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:31:53.958 [2024-11-27 12:15:43.874045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.958 [2024-11-27 12:15:43.924149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.958 [2024-11-27 12:15:43.924183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:53.958 [2024-11-27 12:15:43.924198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 50.116 ms 00:31:53.958 [2024-11-27 12:15:43.924211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.958 [2024-11-27 12:15:43.924257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.958 [2024-11-27 12:15:43.924270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:53.958 [2024-11-27 12:15:43.924282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:53.958 [2024-11-27 12:15:43.924300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.958 [2024-11-27 12:15:43.924470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.959 [2024-11-27 12:15:43.924486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:53.959 [2024-11-27 12:15:43.924500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:31:53.959 [2024-11-27 12:15:43.924511] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:53.959 [2024-11-27 12:15:43.924561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.959 [2024-11-27 12:15:43.924575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:53.959 [2024-11-27 12:15:43.924587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:31:53.959 [2024-11-27 12:15:43.924606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.959 [2024-11-27 12:15:43.950479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.959 [2024-11-27 12:15:43.950513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:53.959 [2024-11-27 12:15:43.950528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.886 ms 00:31:53.959 [2024-11-27 12:15:43.950545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:53.959 [2024-11-27 12:15:43.950678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:53.959 [2024-11-27 12:15:43.950695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:31:53.959 [2024-11-27 12:15:43.950708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:53.959 [2024-11-27 12:15:43.950721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.219 [2024-11-27 12:15:44.008101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.219 [2024-11-27 12:15:44.008143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:54.219 [2024-11-27 12:15:44.008161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 57.448 ms 00:31:54.219 [2024-11-27 12:15:44.008175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.219 [2024-11-27 12:15:44.021552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.219 [2024-11-27 12:15:44.021598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:54.219 [2024-11-27 12:15:44.021612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.568 ms 00:31:54.219 [2024-11-27 12:15:44.021624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.219 [2024-11-27 12:15:44.111909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.219 [2024-11-27 12:15:44.111970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:54.219 [2024-11-27 12:15:44.111988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 90.348 ms 00:31:54.219 [2024-11-27 12:15:44.112002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.219 [2024-11-27 12:15:44.112239] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:54.219 [2024-11-27 12:15:44.112434] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:54.219 [2024-11-27 12:15:44.112608] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:54.219 [2024-11-27 12:15:44.112779] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:54.219 [2024-11-27 12:15:44.112795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.219 [2024-11-27 12:15:44.112808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:54.219 [2024-11-27 
12:15:44.112822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.741 ms 00:31:54.219 [2024-11-27 12:15:44.112834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.219 [2024-11-27 12:15:44.112905] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:54.219 [2024-11-27 12:15:44.112922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.219 [2024-11-27 12:15:44.112941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:54.219 [2024-11-27 12:15:44.112955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:31:54.219 [2024-11-27 12:15:44.112968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.219 [2024-11-27 12:15:44.134382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.219 [2024-11-27 12:15:44.134428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:54.219 [2024-11-27 12:15:44.134444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.397 ms 00:31:54.219 [2024-11-27 12:15:44.134458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.219 [2024-11-27 12:15:44.147479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.219 [2024-11-27 12:15:44.147516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:54.219 [2024-11-27 12:15:44.147530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:31:54.219 [2024-11-27 12:15:44.147543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.219 [2024-11-27 12:15:44.147680] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:54.219 [2024-11-27 12:15:44.147998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.219 [2024-11-27 12:15:44.148012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:54.219 [2024-11-27 12:15:44.148025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.320 ms 00:31:54.219 [2024-11-27 12:15:44.148036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.787 [2024-11-27 12:15:44.733539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.787 [2024-11-27 12:15:44.733581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:54.787 [2024-11-27 12:15:44.733597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 585.396 ms 00:31:54.787 [2024-11-27 12:15:44.733609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.787 [2024-11-27 12:15:44.739337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.787 [2024-11-27 12:15:44.739387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:54.787 [2024-11-27 12:15:44.739403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.411 ms 00:31:54.787 [2024-11-27 12:15:44.739423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.787 [2024-11-27 12:15:44.739917] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:31:54.788 [2024-11-27 12:15:44.739948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.788 [2024-11-27 12:15:44.739960] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:54.788 [2024-11-27 12:15:44.739973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.490 ms 00:31:54.788 [2024-11-27 12:15:44.739986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.788 [2024-11-27 12:15:44.740020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.788 [2024-11-27 12:15:44.740034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:54.788 [2024-11-27 12:15:44.740046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:54.788 [2024-11-27 12:15:44.740065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.788 [2024-11-27 12:15:44.740105] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 593.395 ms, result 0 00:31:54.788 [2024-11-27 12:15:44.740149] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:54.788 [2024-11-27 12:15:44.740312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.788 [2024-11-27 12:15:44.740324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:54.788 [2024-11-27 12:15:44.740336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.164 ms 00:31:54.788 [2024-11-27 12:15:44.740346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.354 [2024-11-27 12:15:45.322790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.354 [2024-11-27 12:15:45.322832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:55.354 [2024-11-27 12:15:45.322863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 582.252 ms 00:31:55.354 [2024-11-27 12:15:45.322875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.354 [2024-11-27 12:15:45.328204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.354 [2024-11-27 12:15:45.328242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:55.354 [2024-11-27 12:15:45.328255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.313 ms 00:31:55.354 [2024-11-27 12:15:45.328266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.354 [2024-11-27 12:15:45.328886] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:55.354 [2024-11-27 12:15:45.328918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.354 [2024-11-27 12:15:45.328929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:55.354 [2024-11-27 12:15:45.328943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.621 ms 00:31:55.354 [2024-11-27 12:15:45.328954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.354 [2024-11-27 12:15:45.328990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.354 [2024-11-27 12:15:45.329003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:55.354 [2024-11-27 12:15:45.329014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:55.354 [2024-11-27 12:15:45.329026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.354 [2024-11-27 
12:15:45.329065] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 589.870 ms, result 0 00:31:55.354 [2024-11-27 12:15:45.329109] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:55.354 [2024-11-27 12:15:45.329124] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:55.354 [2024-11-27 12:15:45.329138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.354 [2024-11-27 12:15:45.329150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:55.354 [2024-11-27 12:15:45.329162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1183.406 ms 00:31:55.354 [2024-11-27 12:15:45.329173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.354 [2024-11-27 12:15:45.329208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.354 [2024-11-27 12:15:45.329227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:55.354 [2024-11-27 12:15:45.329241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:55.354 [2024-11-27 12:15:45.329253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.354 [2024-11-27 12:15:45.339748] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:55.354 [2024-11-27 12:15:45.339896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.354 [2024-11-27 12:15:45.339911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:55.354 [2024-11-27 12:15:45.339925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.641 ms 00:31:55.354 [2024-11-27 12:15:45.339937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.354 [2024-11-27 12:15:45.340535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.354 [2024-11-27 12:15:45.340564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:55.354 [2024-11-27 12:15:45.340578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.523 ms 00:31:55.354 [2024-11-27 12:15:45.340590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.354 [2024-11-27 12:15:45.342497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.354 [2024-11-27 12:15:45.342523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:55.354 [2024-11-27 12:15:45.342537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.888 ms 00:31:55.354 [2024-11-27 12:15:45.342548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.354 [2024-11-27 12:15:45.342593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.354 [2024-11-27 12:15:45.342606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:55.354 [2024-11-27 12:15:45.342624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:55.354 [2024-11-27 12:15:45.342635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.354 [2024-11-27 12:15:45.342742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.354 [2024-11-27 12:15:45.342755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:55.354 
[2024-11-27 12:15:45.342767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:31:55.354 [2024-11-27 12:15:45.342778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.354 [2024-11-27 12:15:45.342804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.354 [2024-11-27 12:15:45.342816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:55.354 [2024-11-27 12:15:45.342828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:55.354 [2024-11-27 12:15:45.342840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.354 [2024-11-27 12:15:45.342886] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:55.354 [2024-11-27 12:15:45.342900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.354 [2024-11-27 12:15:45.342911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:55.354 [2024-11-27 12:15:45.342923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:31:55.354 [2024-11-27 12:15:45.342934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.354 [2024-11-27 12:15:45.342993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.354 [2024-11-27 12:15:45.343006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:55.354 [2024-11-27 12:15:45.343018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:31:55.354 [2024-11-27 12:15:45.343030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.354 [2024-11-27 12:15:45.344386] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1558.835 ms, result 0 00:31:55.354 [2024-11-27 12:15:45.356741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.354 [2024-11-27 12:15:45.372708] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:55.354 [2024-11-27 12:15:45.382335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:55.613 12:15:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:55.613 12:15:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:55.613 12:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:55.613 12:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:55.613 12:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:31:55.613 12:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:55.613 12:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:55.613 12:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:55.613 Validate MD5 checksum, iteration 1 00:31:55.613 12:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:55.613 12:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:55.613 12:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:55.613 12:15:45 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:55.613 12:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:55.613 12:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:55.613 12:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:55.613 [2024-11-27 12:15:45.504936] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:31:55.613 [2024-11-27 12:15:45.505044] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84281 ] 00:31:55.872 [2024-11-27 12:15:45.682523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.872 [2024-11-27 12:15:45.785929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:57.778  [2024-11-27T12:15:48.399Z] Copying: 581/1024 [MB] (581 MBps) [2024-11-27T12:15:50.936Z] Copying: 1024/1024 [MB] (average 574 MBps) 00:32:00.883 00:32:00.883 12:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:00.883 12:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:02.261 12:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:02.261 Validate MD5 checksum, iteration 2 00:32:02.261 12:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=aa11631852852a687d27294306e1d068 00:32:02.261 12:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ aa11631852852a687d27294306e1d068 != \a\a\1\1\6\3\1\8\5\2\8\5\2\a\6\8\7\d\2\7\2\9\4\3\0\6\e\1\d\0\6\8 ]] 00:32:02.261 12:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:02.261 12:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:02.261 12:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:02.261 12:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:02.261 12:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:02.261 12:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:02.261 12:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:02.261 12:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:02.261 12:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:02.520 [2024-11-27 12:15:52.391501] Starting SPDK v25.01-pre git sha1 
2f2acf4eb / DPDK 24.03.0 initialization... 00:32:02.520 [2024-11-27 12:15:52.392318] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84358 ] 00:32:02.779 [2024-11-27 12:15:52.588236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.779 [2024-11-27 12:15:52.693636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.690  [2024-11-27T12:15:55.002Z] Copying: 639/1024 [MB] (639 MBps) [2024-11-27T12:15:56.381Z] Copying: 1024/1024 [MB] (average 629 MBps) 00:32:06.328 00:32:06.328 12:15:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:06.328 12:15:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:08.233 12:15:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:08.233 12:15:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2b425c2862bd36bf7f43135735ed0dd0 00:32:08.233 12:15:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2b425c2862bd36bf7f43135735ed0dd0 != \2\b\4\2\5\c\2\8\6\2\b\d\3\6\b\f\7\f\4\3\1\3\5\7\3\5\e\d\0\d\d\0 ]] 00:32:08.233 12:15:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:08.233 12:15:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:08.233 12:15:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:32:08.233 12:15:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:32:08.233 12:15:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:32:08.233 12:15:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:08.233 12:15:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:32:08.233 12:15:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:32:08.233 12:15:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:32:08.233 12:15:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:32:08.233 12:15:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84241 ]] 00:32:08.233 12:15:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84241 00:32:08.233 12:15:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84241 ']' 00:32:08.233 12:15:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84241 00:32:08.233 12:15:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:08.233 12:15:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:08.233 12:15:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84241 00:32:08.233 killing process with pid 84241 00:32:08.233 12:15:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:08.233 12:15:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:08.233 12:15:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84241' 00:32:08.233 12:15:58 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 84241 00:32:08.233 12:15:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84241 00:32:09.171 [2024-11-27 12:15:59.170331] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:09.171 [2024-11-27 12:15:59.190865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.171 [2024-11-27 12:15:59.190910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:09.171 [2024-11-27 12:15:59.190926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:09.171 [2024-11-27 12:15:59.190937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.171 [2024-11-27 12:15:59.190962] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:09.171 [2024-11-27 12:15:59.195450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.171 [2024-11-27 12:15:59.195483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:09.171 [2024-11-27 12:15:59.195501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.478 ms 00:32:09.171 [2024-11-27 12:15:59.195511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.171 [2024-11-27 12:15:59.195730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.171 [2024-11-27 12:15:59.195744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:09.171 [2024-11-27 12:15:59.195755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.186 ms 00:32:09.171 [2024-11-27 12:15:59.195765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.171 [2024-11-27 12:15:59.196927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.171 [2024-11-27 12:15:59.196961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:09.171 [2024-11-27 12:15:59.196973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.147 ms 00:32:09.171 [2024-11-27 12:15:59.196990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.171 [2024-11-27 12:15:59.197868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.171 [2024-11-27 12:15:59.197899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:09.171 [2024-11-27 12:15:59.197911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.844 ms 00:32:09.171 [2024-11-27 12:15:59.197921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.171 [2024-11-27 12:15:59.211929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.171 [2024-11-27 12:15:59.211965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:09.171 [2024-11-27 12:15:59.211986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.988 ms 00:32:09.171 [2024-11-27 12:15:59.211996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.171 [2024-11-27 12:15:59.219710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.171 [2024-11-27 12:15:59.219746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:32:09.171 [2024-11-27 12:15:59.219759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.685 ms 00:32:09.171 [2024-11-27 12:15:59.219769] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:32:09.171 [2024-11-27 12:15:59.219865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.171 [2024-11-27 12:15:59.219878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:09.171 [2024-11-27 12:15:59.219890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:32:09.171 [2024-11-27 12:15:59.219907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.431 [2024-11-27 12:15:59.233692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.431 [2024-11-27 12:15:59.233741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:32:09.431 [2024-11-27 12:15:59.233753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.790 ms 00:32:09.431 [2024-11-27 12:15:59.233763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.431 [2024-11-27 12:15:59.247918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.431 [2024-11-27 12:15:59.247950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:32:09.431 [2024-11-27 12:15:59.247961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.140 ms 00:32:09.431 [2024-11-27 12:15:59.247970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.431 [2024-11-27 12:15:59.261834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.431 [2024-11-27 12:15:59.261867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:09.431 [2024-11-27 12:15:59.261879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.852 ms 00:32:09.431 [2024-11-27 12:15:59.261888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.431 [2024-11-27 12:15:59.275309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.431 [2024-11-27 12:15:59.275342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:09.431 [2024-11-27 12:15:59.275354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.373 ms 00:32:09.431 [2024-11-27 12:15:59.275371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.431 [2024-11-27 12:15:59.275402] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:09.431 [2024-11-27 12:15:59.275418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:09.431 [2024-11-27 12:15:59.275431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:09.431 [2024-11-27 12:15:59.275442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:09.431 [2024-11-27 12:15:59.275453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:09.431 [2024-11-27 12:15:59.275464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:09.431 [2024-11-27 12:15:59.275475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:09.431 [2024-11-27 12:15:59.275484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:09.431 [2024-11-27 12:15:59.275494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:09.431 
[2024-11-27 12:15:59.275504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:09.431 [2024-11-27 12:15:59.275514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:09.431 [2024-11-27 12:15:59.275524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:09.431 [2024-11-27 12:15:59.275535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:09.431 [2024-11-27 12:15:59.275545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:09.431 [2024-11-27 12:15:59.275555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:09.431 [2024-11-27 12:15:59.275565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:09.431 [2024-11-27 12:15:59.275575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:09.431 [2024-11-27 12:15:59.275585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:09.431 [2024-11-27 12:15:59.275595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:09.431 [2024-11-27 12:15:59.275607] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:09.431 [2024-11-27 12:15:59.275617] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f988fc2c-fca4-4808-9592-ac6190986ab8 00:32:09.431 [2024-11-27 12:15:59.275628] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:09.431 [2024-11-27 12:15:59.275638] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:32:09.431 [2024-11-27 12:15:59.275648] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:32:09.431 [2024-11-27 12:15:59.275659] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:32:09.431 [2024-11-27 12:15:59.275668] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:09.431 [2024-11-27 12:15:59.275679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:09.431 [2024-11-27 12:15:59.275696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:09.431 [2024-11-27 12:15:59.275705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:09.431 [2024-11-27 12:15:59.275714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:09.431 [2024-11-27 12:15:59.275724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.431 [2024-11-27 12:15:59.275736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:09.431 [2024-11-27 12:15:59.275747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.324 ms 00:32:09.431 [2024-11-27 12:15:59.275756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.431 [2024-11-27 12:15:59.295677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.431 [2024-11-27 12:15:59.295709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:32:09.431 [2024-11-27 12:15:59.295723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.922 ms 00:32:09.431 [2024-11-27 12:15:59.295733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
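A quick sanity check on the statistics dump above, using the textbook definition of write amplification (this reading is inferred from the dump itself, not quoted from the FTL source):

    WAF = media writes / user writes = 320 / 0 -> inf

The user-write count is 0 because this run drove only FTL-internal metadata I/O between startup and shutdown, so an infinite (undefined) amplification factor is the expected report here.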
00:32:09.431 [2024-11-27 12:15:59.296317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.432 [2024-11-27 12:15:59.296335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:32:09.432 [2024-11-27 12:15:59.296346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.557 ms 00:32:09.432 [2024-11-27 12:15:59.296372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.432 [2024-11-27 12:15:59.362533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.432 [2024-11-27 12:15:59.362566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:09.432 [2024-11-27 12:15:59.362579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.432 [2024-11-27 12:15:59.362595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.432 [2024-11-27 12:15:59.362629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.432 [2024-11-27 12:15:59.362640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:09.432 [2024-11-27 12:15:59.362651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.432 [2024-11-27 12:15:59.362661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.432 [2024-11-27 12:15:59.362739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.432 [2024-11-27 12:15:59.362754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:09.432 [2024-11-27 12:15:59.362765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.432 [2024-11-27 12:15:59.362775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.432 [2024-11-27 12:15:59.362800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.432 [2024-11-27 12:15:59.362811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:09.432 [2024-11-27 12:15:59.362821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.432 [2024-11-27 12:15:59.362831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.691 [2024-11-27 12:15:59.489758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.691 [2024-11-27 12:15:59.489810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:09.691 [2024-11-27 12:15:59.489826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.691 [2024-11-27 12:15:59.489838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.691 [2024-11-27 12:15:59.591639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.691 [2024-11-27 12:15:59.591689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:09.691 [2024-11-27 12:15:59.591705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.691 [2024-11-27 12:15:59.591715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.691 [2024-11-27 12:15:59.591852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.691 [2024-11-27 12:15:59.591866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:09.691 [2024-11-27 12:15:59.591878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.691 [2024-11-27 12:15:59.591889] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.691 [2024-11-27 12:15:59.591945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.691 [2024-11-27 12:15:59.591976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:09.691 [2024-11-27 12:15:59.591988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.691 [2024-11-27 12:15:59.591998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.691 [2024-11-27 12:15:59.592128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.691 [2024-11-27 12:15:59.592140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:09.691 [2024-11-27 12:15:59.592151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.691 [2024-11-27 12:15:59.592161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.691 [2024-11-27 12:15:59.592199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.691 [2024-11-27 12:15:59.592212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:32:09.691 [2024-11-27 12:15:59.592226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.691 [2024-11-27 12:15:59.592236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.691 [2024-11-27 12:15:59.592285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.691 [2024-11-27 12:15:59.592296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:09.691 [2024-11-27 12:15:59.592307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.691 [2024-11-27 12:15:59.592318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.691 [2024-11-27 12:15:59.592394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.691 [2024-11-27 12:15:59.592412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:09.691 [2024-11-27 12:15:59.592424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.691 [2024-11-27 12:15:59.592434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.691 [2024-11-27 12:15:59.592579] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 402.324 ms, result 0 00:32:11.072 12:16:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:11.072 12:16:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:11.072 12:16:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:32:11.072 12:16:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:32:11.072 12:16:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:32:11.072 12:16:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:11.072 Remove shared memory files 00:32:11.072 12:16:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:32:11.072 12:16:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:11.072 12:16:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:11.072 12:16:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:11.072 12:16:00 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84013 00:32:11.072 12:16:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:11.072 12:16:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:11.072 00:32:11.072 real 1m28.070s 00:32:11.072 user 1m56.809s 00:32:11.072 sys 0m25.747s 00:32:11.072 12:16:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:11.072 ************************************ 00:32:11.072 END TEST ftl_upgrade_shutdown 00:32:11.072 12:16:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:11.072 ************************************ 00:32:11.072 12:16:01 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:32:11.072 12:16:01 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:32:11.072 12:16:01 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:32:11.072 12:16:01 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:11.072 12:16:01 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:11.072 ************************************ 00:32:11.072 START TEST ftl_restore_fast 00:32:11.072 ************************************ 00:32:11.072 12:16:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:32:11.358 * Looking for test storage... 00:32:11.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:11.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.358 --rc genhtml_branch_coverage=1 00:32:11.358 --rc genhtml_function_coverage=1 00:32:11.358 --rc genhtml_legend=1 00:32:11.358 --rc geninfo_all_blocks=1 00:32:11.358 --rc geninfo_unexecuted_blocks=1 00:32:11.358 00:32:11.358 ' 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:11.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.358 --rc genhtml_branch_coverage=1 00:32:11.358 --rc genhtml_function_coverage=1 00:32:11.358 --rc genhtml_legend=1 00:32:11.358 --rc geninfo_all_blocks=1 00:32:11.358 --rc geninfo_unexecuted_blocks=1 00:32:11.358 00:32:11.358 ' 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:11.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.358 --rc genhtml_branch_coverage=1 00:32:11.358 --rc genhtml_function_coverage=1 00:32:11.358 --rc genhtml_legend=1 00:32:11.358 --rc geninfo_all_blocks=1 00:32:11.358 --rc geninfo_unexecuted_blocks=1 00:32:11.358 00:32:11.358 ' 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:11.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.358 --rc genhtml_branch_coverage=1 00:32:11.358 --rc genhtml_function_coverage=1 00:32:11.358 --rc genhtml_legend=1 00:32:11.358 --rc geninfo_all_blocks=1 00:32:11.358 --rc geninfo_unexecuted_blocks=1 00:32:11.358 00:32:11.358 ' 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
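For reference, a minimal Bash sketch of the version check the xtrace above steps through (lt 1.15 2 via cmp_versions). The names mirror the trace; the body is a simplified reconstruction, not the verbatim scripts/common.sh:

# lt A B: succeed when version A sorts strictly before version B
lt() { cmp_versions "$1" '<' "$2"; }

# numeric fields pass through; anything else compares as 0 in this sketch
decimal() {
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
}

cmp_versions() {
    local ver1 ver2 ver1_l ver2_l op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"    # split on '.', '-' and ':', as in the trace
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        ver1[v]=$(decimal "${ver1[v]:-0}")
        ver2[v]=$(decimal "${ver2[v]:-0}")
        # the first differing field decides the comparison
        if (( ver1[v] > ver2[v] )); then [[ $op == '>' ]] && return 0 || return 1; fi
        if (( ver1[v] < ver2[v] )); then [[ $op == '<' ]] && return 0 || return 1; fi
    done
    [[ $op == '=' ]]    # every field equal
}

Here lt 1.15 2 splits the arguments into (1 15) and (2), finds 1 < 2 in the first field and returns 0, which is why the trace above takes the pre-2.0 lcov branch and exports the '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' option set.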
00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.H4Uwdkm9X5 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:32:11.358 12:16:01 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:32:11.358 12:16:01 ftl.ftl_restore_fast 
00:32:11.359 12:16:01 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3
00:32:11.359 12:16:01 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0
00:32:11.359 12:16:01 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240
00:32:11.359 12:16:01 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:32:11.359 12:16:01 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=84528
00:32:11.359 12:16:01 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:32:11.359 12:16:01 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 84528
00:32:11.359 12:16:01 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 84528 ']'
00:32:11.359 12:16:01 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:11.359 12:16:01 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:11.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:11.359 12:16:01 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:11.359 12:16:01 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:11.359 12:16:01 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x
00:32:11.619 [2024-11-27 12:16:01.419144] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:32:11.619 [2024-11-27 12:16:01.419272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84528 ]
00:32:11.619 [2024-11-27 12:16:01.596107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:11.878 [2024-11-27 12:16:01.730983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:12.817 12:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:12.817 12:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0
00:32:12.817 12:16:02 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:32:12.817 12:16:02 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0
00:32:12.817 12:16:02 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:32:12.817 12:16:02 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424
00:32:12.817 12:16:02 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev
00:32:12.817 12:16:02 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:32:13.076 12:16:02 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1
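spdk_tgt is backgrounded at restore.sh@38 and waitforlisten blocks until its RPC socket answers; the real helper lives in common/autotest_common.sh. A stand-in with the same observable behavior (the 'Waiting for process...' message and the 100-retry budget traced above) might look like:

    waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1  # give up if the target died during startup
        # any harmless RPC proves the socket is up and serving
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
      done
      return 1
    }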
00:32:13.076 12:16:02 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size
00:32:13.076 12:16:02 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:32:13.076 12:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:32:13.076 12:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info
00:32:13.076 12:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs
00:32:13.076 12:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb
00:32:13.076 12:16:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:32:13.335 12:16:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[
00:32:13.335 {
00:32:13.335 "name": "nvme0n1",
00:32:13.335 "aliases": [
00:32:13.335 "145ff502-926c-472e-9456-084e11939de4"
00:32:13.335 ],
00:32:13.335 "product_name": "NVMe disk",
00:32:13.335 "block_size": 4096,
00:32:13.335 "num_blocks": 1310720,
00:32:13.335 "uuid": "145ff502-926c-472e-9456-084e11939de4",
00:32:13.335 "numa_id": -1,
00:32:13.335 "assigned_rate_limits": {
00:32:13.335 "rw_ios_per_sec": 0,
00:32:13.335 "rw_mbytes_per_sec": 0,
00:32:13.335 "r_mbytes_per_sec": 0,
00:32:13.335 "w_mbytes_per_sec": 0
00:32:13.335 },
00:32:13.335 "claimed": true,
00:32:13.335 "claim_type": "read_many_write_one",
00:32:13.335 "zoned": false,
00:32:13.335 "supported_io_types": {
00:32:13.335 "read": true,
00:32:13.335 "write": true,
00:32:13.335 "unmap": true,
00:32:13.335 "flush": true,
00:32:13.335 "reset": true,
00:32:13.335 "nvme_admin": true,
00:32:13.335 "nvme_io": true,
00:32:13.335 "nvme_io_md": false,
00:32:13.335 "write_zeroes": true,
00:32:13.335 "zcopy": false,
00:32:13.335 "get_zone_info": false,
00:32:13.335 "zone_management": false,
00:32:13.335 "zone_append": false,
00:32:13.335 "compare": true,
00:32:13.335 "compare_and_write": false,
00:32:13.335 "abort": true,
00:32:13.335 "seek_hole": false,
00:32:13.335 "seek_data": false,
00:32:13.335 "copy": true,
00:32:13.335 "nvme_iov_md": false
00:32:13.335 },
00:32:13.335 "driver_specific": {
00:32:13.335 "nvme": [
00:32:13.335 {
00:32:13.335 "pci_address": "0000:00:11.0",
00:32:13.335 "trid": {
00:32:13.335 "trtype": "PCIe",
00:32:13.335 "traddr": "0000:00:11.0"
00:32:13.335 },
00:32:13.335 "ctrlr_data": {
00:32:13.335 "cntlid": 0,
00:32:13.335 "vendor_id": "0x1b36",
00:32:13.335 "model_number": "QEMU NVMe Ctrl",
00:32:13.335 "serial_number": "12341",
00:32:13.335 "firmware_revision": "8.0.0",
00:32:13.335 "subnqn": "nqn.2019-08.org.qemu:12341",
00:32:13.335 "oacs": {
00:32:13.335 "security": 0,
00:32:13.335 "format": 1,
00:32:13.335 "firmware": 0,
00:32:13.335 "ns_manage": 1
00:32:13.335 },
00:32:13.335 "multi_ctrlr": false,
00:32:13.335 "ana_reporting": false
00:32:13.335 },
00:32:13.335 "vs": {
00:32:13.335 "nvme_version": "1.4"
00:32:13.335 },
00:32:13.335 "ns_data": {
00:32:13.335 "id": 1,
00:32:13.335 "can_share": false
00:32:13.335 }
00:32:13.335 }
00:32:13.335 ],
00:32:13.335 "mp_policy": "active_passive"
00:32:13.335 }
00:32:13.335 }
00:32:13.335 ]'
00:32:13.335 12:16:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:32:13.335 12:16:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096
00:32:13.335 12:16:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:32:13.335 12:16:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720
00:32:13.335 12:16:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:32:13.335 12:16:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120
00:32:13.335 12:16:03 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120
00:32:13.335 12:16:03 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
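get_bdev_size, traced above, is just block_size * num_blocks from the bdev_get_bdevs JSON, scaled to MiB. Replaying the arithmetic for the two sizes in this log:

    echo $(( 4096 * 1310720 / 1024 / 1024 ))    # -> 5120, the nvme0n1 size echoed above
    echo $(( 4096 * 26476544 / 1024 / 1024 ))   # -> 103424, the size of the lvol created next

The failed guard [[ 103424 -le 5120 ]] above shows the requested 103424 MiB volume cannot be a plain slice of the 5120 MiB namespace; the lvol created below is thin-provisioned (-t), which is what makes the oversubscription workable.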
00:32:13.335 12:16:03 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols
00:32:13.335 12:16:03 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:32:13.335 12:16:03 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:32:13.594 12:16:03 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=337b4f46-63dc-4369-871a-88c846e5c43c
00:32:13.594 12:16:03 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores
00:32:13.594 12:16:03 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 337b4f46-63dc-4369-871a-88c846e5c43c
00:32:13.851 12:16:03 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:32:13.851 12:16:03 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=ee1e7619-443c-44c5-99bc-2e2f5c00ecba
00:32:13.851 12:16:03 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ee1e7619-443c-44c5-99bc-2e2f5c00ecba
00:32:14.109 12:16:04 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=145e516b-bd0e-48b1-8773-87a226d945d0
00:32:14.109 12:16:04 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']'
00:32:14.109 12:16:04 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 145e516b-bd0e-48b1-8773-87a226d945d0
00:32:14.109 12:16:04 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0
00:32:14.109 12:16:04 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:32:14.109 12:16:04 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=145e516b-bd0e-48b1-8773-87a226d945d0
00:32:14.109 12:16:04 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size=
00:32:14.109 12:16:04 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 145e516b-bd0e-48b1-8773-87a226d945d0
00:32:14.109 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=145e516b-bd0e-48b1-8773-87a226d945d0
00:32:14.109 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info
00:32:14.109 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs
00:32:14.110 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb
00:32:14.110 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 145e516b-bd0e-48b1-8773-87a226d945d0
00:32:14.369 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[
00:32:14.369 {
00:32:14.369 "name": "145e516b-bd0e-48b1-8773-87a226d945d0",
00:32:14.369 "aliases": [
00:32:14.369 "lvs/nvme0n1p0"
00:32:14.369 ],
00:32:14.369 "product_name": "Logical Volume",
00:32:14.369 "block_size": 4096,
00:32:14.369 "num_blocks": 26476544,
00:32:14.369 "uuid": "145e516b-bd0e-48b1-8773-87a226d945d0",
00:32:14.369 "assigned_rate_limits": {
00:32:14.369 "rw_ios_per_sec": 0,
00:32:14.369 "rw_mbytes_per_sec": 0,
00:32:14.369 "r_mbytes_per_sec": 0,
00:32:14.369 "w_mbytes_per_sec": 0
00:32:14.369 },
00:32:14.369 "claimed": false,
00:32:14.369 "zoned": false,
00:32:14.369 "supported_io_types": {
00:32:14.369 "read": true,
00:32:14.369 "write": true,
00:32:14.369 "unmap": true,
00:32:14.369 "flush": false,
00:32:14.369 "reset": true,
00:32:14.369 "nvme_admin": false,
00:32:14.369 "nvme_io": false,
00:32:14.369 "nvme_io_md": false,
00:32:14.369 "write_zeroes": true,
00:32:14.369 "zcopy": false,
00:32:14.369 "get_zone_info": false,
00:32:14.369 "zone_management": false,
00:32:14.369 "zone_append": false,
00:32:14.369 "compare": false,
00:32:14.369 "compare_and_write": false,
00:32:14.369 "abort": false,
00:32:14.369 "seek_hole": true,
00:32:14.369 "seek_data": true,
00:32:14.369 "copy": false,
00:32:14.369 "nvme_iov_md": false
00:32:14.369 },
00:32:14.369 "driver_specific": {
00:32:14.369 "lvol": {
00:32:14.369 "lvol_store_uuid": "ee1e7619-443c-44c5-99bc-2e2f5c00ecba",
00:32:14.369 "base_bdev": "nvme0n1",
00:32:14.369 "thin_provision": true,
00:32:14.369 "num_allocated_clusters": 0,
00:32:14.369 "snapshot": false,
00:32:14.369 "clone": false,
00:32:14.369 "esnap_clone": false
00:32:14.369 }
00:32:14.369 }
00:32:14.369 }
00:32:14.369 ]'
00:32:14.369 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:32:14.369 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096
00:32:14.369 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:32:14.369 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544
00:32:14.369 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:32:14.369 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424
00:32:14.369 12:16:04 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171
00:32:14.369 12:16:04 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev
00:32:14.369 12:16:04 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:32:14.629 12:16:04 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:32:14.629 12:16:04 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]]
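Condensed, the volume plumbing traced since the size check boils down to four RPCs (store and lvol UUIDs are per-run values):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_lvol_delete_lvstore -u 337b4f46-63dc-4369-871a-88c846e5c43c  # clear_lvols: drop the stale store
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                              # -> ee1e7619-443c-44c5-99bc-2e2f5c00ecba
    $rpc bdev_lvol_create nvme0n1p0 103424 -t \
        -u ee1e7619-443c-44c5-99bc-2e2f5c00ecba                            # thin 103424 MiB base volume
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0       # cache controller -> nvc0n1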
00:32:14.629 12:16:04 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 145e516b-bd0e-48b1-8773-87a226d945d0
00:32:14.629 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=145e516b-bd0e-48b1-8773-87a226d945d0
00:32:14.629 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info
00:32:14.629 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs
00:32:14.629 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb
00:32:14.629 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 145e516b-bd0e-48b1-8773-87a226d945d0
00:32:14.888 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[
00:32:14.888 {
00:32:14.888 "name": "145e516b-bd0e-48b1-8773-87a226d945d0",
00:32:14.888 "aliases": [
00:32:14.888 "lvs/nvme0n1p0"
00:32:14.888 ],
00:32:14.888 "product_name": "Logical Volume",
00:32:14.888 "block_size": 4096,
00:32:14.888 "num_blocks": 26476544,
00:32:14.888 "uuid": "145e516b-bd0e-48b1-8773-87a226d945d0",
00:32:14.888 "assigned_rate_limits": {
00:32:14.888 "rw_ios_per_sec": 0,
00:32:14.888 "rw_mbytes_per_sec": 0,
00:32:14.888 "r_mbytes_per_sec": 0,
00:32:14.888 "w_mbytes_per_sec": 0
00:32:14.888 },
00:32:14.888 "claimed": false,
00:32:14.888 "zoned": false,
00:32:14.888 "supported_io_types": {
00:32:14.888 "read": true,
00:32:14.888 "write": true,
00:32:14.888 "unmap": true,
00:32:14.888 "flush": false,
00:32:14.888 "reset": true,
00:32:14.888 "nvme_admin": false,
00:32:14.888 "nvme_io": false,
00:32:14.888 "nvme_io_md": false,
00:32:14.888 "write_zeroes": true,
00:32:14.888 "zcopy": false,
00:32:14.888 "get_zone_info": false,
00:32:14.888 "zone_management": false,
00:32:14.888 "zone_append": false,
00:32:14.888 "compare": false,
00:32:14.888 "compare_and_write": false,
00:32:14.888 "abort": false,
00:32:14.888 "seek_hole": true,
00:32:14.888 "seek_data": true,
00:32:14.888 "copy": false,
00:32:14.888 "nvme_iov_md": false
00:32:14.888 },
00:32:14.888 "driver_specific": {
00:32:14.888 "lvol": {
00:32:14.888 "lvol_store_uuid": "ee1e7619-443c-44c5-99bc-2e2f5c00ecba",
00:32:14.888 "base_bdev": "nvme0n1",
00:32:14.888 "thin_provision": true,
00:32:14.888 "num_allocated_clusters": 0,
00:32:14.888 "snapshot": false,
00:32:14.888 "clone": false,
00:32:14.888 "esnap_clone": false
00:32:14.888 }
00:32:14.888 }
00:32:14.888 }
00:32:14.888 ]'
00:32:14.888 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:32:14.888 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096
00:32:14.888 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:32:14.888 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544
00:32:14.888 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:32:14.888 12:16:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424
00:32:14.888 12:16:04 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171
00:32:14.888 12:16:04 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:32:15.148 12:16:05 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0
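cache_size is pinned at 5171 MiB by the helper (its derivation is not part of this excerpt), and the split above hands that slice to FTL as nvc0n1p0. The create call assembled a little further down, flattened into a single invocation, is:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 \
        bdev_ftl_create -b ftl0 -d 145e516b-bd0e-48b1-8773-87a226d945d0 \
        --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown
    # -d: base bdev (the thin lvol), -c: NV-cache bdev (the 5171 MiB split),
    # --l2p_dram_limit 10: cap the resident L2P mapping table at 10 MiB,
    # --fast-shutdown: the shutdown mode this ftl_restore_fast case exercises.

For scale, the startup dump further down reports 20971520 L2P entries at 4 bytes each, i.e. 20971520 * 4 / 2^20 = 80 MiB of mapping metadata; the 10 MiB cap is why the log later prints 'l2p maximum resident size is: 9 (of 10) MiB'.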
00:32:15.148 12:16:05 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 145e516b-bd0e-48b1-8773-87a226d945d0
00:32:15.148 12:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=145e516b-bd0e-48b1-8773-87a226d945d0
00:32:15.148 12:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info
00:32:15.148 12:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs
00:32:15.148 12:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb
00:32:15.148 12:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 145e516b-bd0e-48b1-8773-87a226d945d0
00:32:15.406 12:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[
00:32:15.406 {
00:32:15.406 "name": "145e516b-bd0e-48b1-8773-87a226d945d0",
00:32:15.406 "aliases": [
00:32:15.406 "lvs/nvme0n1p0"
00:32:15.406 ],
00:32:15.406 "product_name": "Logical Volume",
00:32:15.406 "block_size": 4096,
00:32:15.406 "num_blocks": 26476544,
00:32:15.406 "uuid": "145e516b-bd0e-48b1-8773-87a226d945d0",
00:32:15.406 "assigned_rate_limits": {
00:32:15.406 "rw_ios_per_sec": 0,
00:32:15.407 "rw_mbytes_per_sec": 0,
00:32:15.407 "r_mbytes_per_sec": 0,
00:32:15.407 "w_mbytes_per_sec": 0
00:32:15.407 },
00:32:15.407 "claimed": false,
00:32:15.407 "zoned": false,
00:32:15.407 "supported_io_types": {
00:32:15.407 "read": true,
00:32:15.407 "write": true,
00:32:15.407 "unmap": true,
00:32:15.407 "flush": false,
00:32:15.407 "reset": true,
00:32:15.407 "nvme_admin": false,
00:32:15.407 "nvme_io": false,
00:32:15.407 "nvme_io_md": false,
00:32:15.407 "write_zeroes": true,
00:32:15.407 "zcopy": false,
00:32:15.407 "get_zone_info": false,
00:32:15.407 "zone_management": false,
00:32:15.407 "zone_append": false,
00:32:15.407 "compare": false,
00:32:15.407 "compare_and_write": false,
00:32:15.407 "abort": false,
00:32:15.407 "seek_hole": true,
00:32:15.407 "seek_data": true,
00:32:15.407 "copy": false,
00:32:15.407 "nvme_iov_md": false
00:32:15.407 },
00:32:15.407 "driver_specific": {
00:32:15.407 "lvol": {
00:32:15.407 "lvol_store_uuid": "ee1e7619-443c-44c5-99bc-2e2f5c00ecba",
00:32:15.407 "base_bdev": "nvme0n1",
00:32:15.407 "thin_provision": true,
00:32:15.407 "num_allocated_clusters": 0,
00:32:15.407 "snapshot": false,
00:32:15.407 "clone": false,
00:32:15.407 "esnap_clone": false
00:32:15.407 }
00:32:15.407 }
00:32:15.407 }
00:32:15.407 ]'
00:32:15.407 12:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:32:15.407 12:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096
00:32:15.407 12:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:32:15.407 12:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544
00:32:15.407 12:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:32:15.407 12:16:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424
00:32:15.407 12:16:05 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10
00:32:15.407 12:16:05 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 145e516b-bd0e-48b1-8773-87a226d945d0 --l2p_dram_limit 10'
00:32:15.407 12:16:05 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']'
00:32:15.407 12:16:05 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']'
00:32:15.407 12:16:05 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0'
00:32:15.407 12:16:05 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']'
00:32:15.407 12:16:05 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown'
00:32:15.407 12:16:05 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 145e516b-bd0e-48b1-8773-87a226d945d0 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown
00:32:15.667 [2024-11-27 12:16:05.599621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:15.667 [2024-11-27 12:16:05.599671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:32:15.667 [2024-11-27 12:16:05.599691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:32:15.667 [2024-11-27 12:16:05.599702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:15.667 [2024-11-27 12:16:05.599766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:15.667 [2024-11-27 12:16:05.599779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:32:15.667 [2024-11-27 12:16:05.599793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms
00:32:15.667 [2024-11-27 12:16:05.599804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:15.667 [2024-11-27 12:16:05.599827] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:32:15.667 [2024-11-27 12:16:05.600672] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:32:15.667 [2024-11-27 12:16:05.600709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:15.667 [2024-11-27 12:16:05.600720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:32:15.667 [2024-11-27 12:16:05.600734] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.884 ms 00:32:15.667 [2024-11-27 12:16:05.600744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.667 [2024-11-27 12:16:05.600819] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9518d22f-ff7b-4dae-936b-af48dab8ee92 00:32:15.667 [2024-11-27 12:16:05.603208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.667 [2024-11-27 12:16:05.603249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:32:15.667 [2024-11-27 12:16:05.603262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:32:15.668 [2024-11-27 12:16:05.603276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.668 [2024-11-27 12:16:05.617276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.668 [2024-11-27 12:16:05.617311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:15.668 [2024-11-27 12:16:05.617324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.982 ms 00:32:15.668 [2024-11-27 12:16:05.617337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.668 [2024-11-27 12:16:05.617448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.668 [2024-11-27 12:16:05.617466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:15.668 [2024-11-27 12:16:05.617477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:32:15.668 [2024-11-27 12:16:05.617496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.668 [2024-11-27 12:16:05.617555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.668 [2024-11-27 12:16:05.617570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:15.668 [2024-11-27 12:16:05.617584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:15.668 [2024-11-27 12:16:05.617597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.668 [2024-11-27 12:16:05.617619] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:15.668 [2024-11-27 12:16:05.623818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.668 [2024-11-27 12:16:05.623851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:15.668 [2024-11-27 12:16:05.623868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.210 ms 00:32:15.668 [2024-11-27 12:16:05.623879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.668 [2024-11-27 12:16:05.623915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.668 [2024-11-27 12:16:05.623927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:15.668 [2024-11-27 12:16:05.623940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:15.668 [2024-11-27 12:16:05.623950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.668 [2024-11-27 12:16:05.623987] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:32:15.668 [2024-11-27 12:16:05.624117] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:15.668 [2024-11-27 12:16:05.624139] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:15.668 [2024-11-27 12:16:05.624153] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:15.668 [2024-11-27 12:16:05.624170] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:15.668 [2024-11-27 12:16:05.624181] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:15.668 [2024-11-27 12:16:05.624196] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:15.668 [2024-11-27 12:16:05.624210] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:15.668 [2024-11-27 12:16:05.624223] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:15.668 [2024-11-27 12:16:05.624233] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:15.668 [2024-11-27 12:16:05.624246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.668 [2024-11-27 12:16:05.624267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:15.668 [2024-11-27 12:16:05.624281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:32:15.668 [2024-11-27 12:16:05.624291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.668 [2024-11-27 12:16:05.624379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.668 [2024-11-27 12:16:05.624391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:15.668 [2024-11-27 12:16:05.624404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:32:15.668 [2024-11-27 12:16:05.624414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.668 [2024-11-27 12:16:05.624510] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:15.668 [2024-11-27 12:16:05.624529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:15.668 [2024-11-27 12:16:05.624544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:15.668 [2024-11-27 12:16:05.624554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:15.668 [2024-11-27 12:16:05.624568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:15.668 [2024-11-27 12:16:05.624578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:15.668 [2024-11-27 12:16:05.624590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:15.668 [2024-11-27 12:16:05.624600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:15.668 [2024-11-27 12:16:05.624612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:15.668 [2024-11-27 12:16:05.624621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:15.668 [2024-11-27 12:16:05.624634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:15.668 [2024-11-27 12:16:05.624643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:15.668 [2024-11-27 12:16:05.624657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:15.668 [2024-11-27 12:16:05.624666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:15.668 [2024-11-27 12:16:05.624678] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:32:15.668 [2024-11-27 12:16:05.624687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:15.668 [2024-11-27 12:16:05.624702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:15.668 [2024-11-27 12:16:05.624712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:15.668 [2024-11-27 12:16:05.624725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:15.668 [2024-11-27 12:16:05.624734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:15.668 [2024-11-27 12:16:05.624746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:15.668 [2024-11-27 12:16:05.624755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:15.668 [2024-11-27 12:16:05.624766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:15.668 [2024-11-27 12:16:05.624775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:15.668 [2024-11-27 12:16:05.624786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:15.668 [2024-11-27 12:16:05.624795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:15.668 [2024-11-27 12:16:05.624806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:15.668 [2024-11-27 12:16:05.624814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:15.668 [2024-11-27 12:16:05.624826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:15.668 [2024-11-27 12:16:05.624838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:15.668 [2024-11-27 12:16:05.624849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:15.668 [2024-11-27 12:16:05.624857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:15.668 [2024-11-27 12:16:05.624873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:15.668 [2024-11-27 12:16:05.624882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:15.668 [2024-11-27 12:16:05.624894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:15.668 [2024-11-27 12:16:05.624903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:15.668 [2024-11-27 12:16:05.624915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:15.668 [2024-11-27 12:16:05.624923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:15.668 [2024-11-27 12:16:05.624934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:15.668 [2024-11-27 12:16:05.624943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:15.668 [2024-11-27 12:16:05.624954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:15.668 [2024-11-27 12:16:05.624963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:15.668 [2024-11-27 12:16:05.624975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:15.668 [2024-11-27 12:16:05.624984] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:15.668 [2024-11-27 12:16:05.624998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:15.668 [2024-11-27 12:16:05.625007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:15.668 [2024-11-27 
12:16:05.625022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:15.668 [2024-11-27 12:16:05.625033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:15.668 [2024-11-27 12:16:05.625049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:15.668 [2024-11-27 12:16:05.625058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:15.668 [2024-11-27 12:16:05.625070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:15.668 [2024-11-27 12:16:05.625080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:15.668 [2024-11-27 12:16:05.625092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:15.668 [2024-11-27 12:16:05.625106] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:15.668 [2024-11-27 12:16:05.625125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:15.668 [2024-11-27 12:16:05.625136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:15.668 [2024-11-27 12:16:05.625150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:15.668 [2024-11-27 12:16:05.625160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:15.668 [2024-11-27 12:16:05.625172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:15.668 [2024-11-27 12:16:05.625182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:15.668 [2024-11-27 12:16:05.625195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:15.668 [2024-11-27 12:16:05.625206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:15.669 [2024-11-27 12:16:05.625220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:15.669 [2024-11-27 12:16:05.625229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:15.669 [2024-11-27 12:16:05.625246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:15.669 [2024-11-27 12:16:05.625256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:15.669 [2024-11-27 12:16:05.625269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:15.669 [2024-11-27 12:16:05.625279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:15.669 [2024-11-27 12:16:05.625294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:15.669 [2024-11-27 
12:16:05.625304] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:15.669 [2024-11-27 12:16:05.625318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:15.669 [2024-11-27 12:16:05.625329] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:15.669 [2024-11-27 12:16:05.625342] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:15.669 [2024-11-27 12:16:05.625353] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:15.669 [2024-11-27 12:16:05.625383] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:15.669 [2024-11-27 12:16:05.625394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.669 [2024-11-27 12:16:05.625407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:15.669 [2024-11-27 12:16:05.625418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:32:15.669 [2024-11-27 12:16:05.625431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.669 [2024-11-27 12:16:05.625472] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:32:15.669 [2024-11-27 12:16:05.625491] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:32:19.065 [2024-11-27 12:16:09.063689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.065 [2024-11-27 12:16:09.063734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:32:19.065 [2024-11-27 12:16:09.063749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3443.798 ms 00:32:19.065 [2024-11-27 12:16:09.063764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.065 [2024-11-27 12:16:09.111393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.065 [2024-11-27 12:16:09.111437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:19.065 [2024-11-27 12:16:09.111452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.248 ms 00:32:19.065 [2024-11-27 12:16:09.111466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.065 [2024-11-27 12:16:09.111628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.065 [2024-11-27 12:16:09.111646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:19.065 [2024-11-27 12:16:09.111657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:32:19.065 [2024-11-27 12:16:09.111678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.324 [2024-11-27 12:16:09.164316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.324 [2024-11-27 12:16:09.164364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:19.324 [2024-11-27 12:16:09.164378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.682 ms 00:32:19.324 [2024-11-27 12:16:09.164392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:32:19.324 [2024-11-27 12:16:09.164439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.324 [2024-11-27 12:16:09.164454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:19.324 [2024-11-27 12:16:09.164465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:19.324 [2024-11-27 12:16:09.164489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.324 [2024-11-27 12:16:09.165305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.324 [2024-11-27 12:16:09.165332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:19.324 [2024-11-27 12:16:09.165343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:32:19.324 [2024-11-27 12:16:09.165366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.324 [2024-11-27 12:16:09.165472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.325 [2024-11-27 12:16:09.165491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:19.325 [2024-11-27 12:16:09.165502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:32:19.325 [2024-11-27 12:16:09.165519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.325 [2024-11-27 12:16:09.190218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.325 [2024-11-27 12:16:09.190255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:19.325 [2024-11-27 12:16:09.190269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.717 ms 00:32:19.325 [2024-11-27 12:16:09.190283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.325 [2024-11-27 12:16:09.214558] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:19.325 [2024-11-27 12:16:09.219700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.325 [2024-11-27 12:16:09.219729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:19.325 [2024-11-27 12:16:09.219745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.358 ms 00:32:19.325 [2024-11-27 12:16:09.219764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.325 [2024-11-27 12:16:09.308922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.325 [2024-11-27 12:16:09.308961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:32:19.325 [2024-11-27 12:16:09.308979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.266 ms 00:32:19.325 [2024-11-27 12:16:09.308990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.325 [2024-11-27 12:16:09.309173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.325 [2024-11-27 12:16:09.309187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:19.325 [2024-11-27 12:16:09.309205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:32:19.325 [2024-11-27 12:16:09.309215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.325 [2024-11-27 12:16:09.343551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.325 [2024-11-27 12:16:09.343587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
00:32:19.325 [2024-11-27 12:16:09.343606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.338 ms 00:32:19.325 [2024-11-27 12:16:09.343616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.584 [2024-11-27 12:16:09.377220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.584 [2024-11-27 12:16:09.377254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:32:19.584 [2024-11-27 12:16:09.377272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.608 ms 00:32:19.584 [2024-11-27 12:16:09.377282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.584 [2024-11-27 12:16:09.378046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.584 [2024-11-27 12:16:09.378075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:19.584 [2024-11-27 12:16:09.378094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:32:19.584 [2024-11-27 12:16:09.378105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.584 [2024-11-27 12:16:09.475197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.584 [2024-11-27 12:16:09.475235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:32:19.584 [2024-11-27 12:16:09.475258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.193 ms 00:32:19.584 [2024-11-27 12:16:09.475269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.584 [2024-11-27 12:16:09.511700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.584 [2024-11-27 12:16:09.511736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:32:19.584 [2024-11-27 12:16:09.511753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.397 ms 00:32:19.584 [2024-11-27 12:16:09.511765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.584 [2024-11-27 12:16:09.545152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.584 [2024-11-27 12:16:09.545187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:32:19.584 [2024-11-27 12:16:09.545203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.396 ms 00:32:19.584 [2024-11-27 12:16:09.545214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.584 [2024-11-27 12:16:09.579743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.584 [2024-11-27 12:16:09.579779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:19.584 [2024-11-27 12:16:09.579795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.540 ms 00:32:19.584 [2024-11-27 12:16:09.579805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.584 [2024-11-27 12:16:09.579853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.584 [2024-11-27 12:16:09.579865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:19.584 [2024-11-27 12:16:09.579883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:19.584 [2024-11-27 12:16:09.579893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.584 [2024-11-27 12:16:09.580013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.584 [2024-11-27 12:16:09.580029] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:19.584 [2024-11-27 12:16:09.580044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:32:19.584 [2024-11-27 12:16:09.580054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.584 [2024-11-27 12:16:09.581372] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3987.684 ms, result 0 00:32:19.584 { 00:32:19.584 "name": "ftl0", 00:32:19.584 "uuid": "9518d22f-ff7b-4dae-936b-af48dab8ee92" 00:32:19.584 } 00:32:19.584 12:16:09 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:32:19.584 12:16:09 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:32:19.844 12:16:09 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:32:19.844 12:16:09 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:32:20.103 [2024-11-27 12:16:09.967690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.103 [2024-11-27 12:16:09.967739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:20.103 [2024-11-27 12:16:09.967753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:20.103 [2024-11-27 12:16:09.967766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.103 [2024-11-27 12:16:09.967790] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:20.103 [2024-11-27 12:16:09.972181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.103 [2024-11-27 12:16:09.972210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:20.103 [2024-11-27 12:16:09.972225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.376 ms 00:32:20.103 [2024-11-27 12:16:09.972235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.103 [2024-11-27 12:16:09.972495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.103 [2024-11-27 12:16:09.972511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:20.103 [2024-11-27 12:16:09.972526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:32:20.103 [2024-11-27 12:16:09.972535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.104 [2024-11-27 12:16:09.974876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.104 [2024-11-27 12:16:09.974898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:20.104 [2024-11-27 12:16:09.974912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.325 ms 00:32:20.104 [2024-11-27 12:16:09.974922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.104 [2024-11-27 12:16:09.979578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.104 [2024-11-27 12:16:09.979608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:20.104 [2024-11-27 12:16:09.979623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.640 ms 00:32:20.104 [2024-11-27 12:16:09.979633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.104 [2024-11-27 12:16:10.014714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.104 
[2024-11-27 12:16:10.014751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:20.104 [2024-11-27 12:16:10.014768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.074 ms 00:32:20.104 [2024-11-27 12:16:10.014778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.104 [2024-11-27 12:16:10.038055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.104 [2024-11-27 12:16:10.038093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:20.104 [2024-11-27 12:16:10.038111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.265 ms 00:32:20.104 [2024-11-27 12:16:10.038122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.104 [2024-11-27 12:16:10.038280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.104 [2024-11-27 12:16:10.038295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:20.104 [2024-11-27 12:16:10.038311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:32:20.104 [2024-11-27 12:16:10.038321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.104 [2024-11-27 12:16:10.074620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.104 [2024-11-27 12:16:10.074674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:20.104 [2024-11-27 12:16:10.074714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.315 ms 00:32:20.104 [2024-11-27 12:16:10.074752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.104 [2024-11-27 12:16:10.111894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.104 [2024-11-27 12:16:10.111929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:20.104 [2024-11-27 12:16:10.111945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.075 ms 00:32:20.104 [2024-11-27 12:16:10.111955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.104 [2024-11-27 12:16:10.145932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.104 [2024-11-27 12:16:10.145968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:20.104 [2024-11-27 12:16:10.145984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.983 ms 00:32:20.104 [2024-11-27 12:16:10.145994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.366 [2024-11-27 12:16:10.179227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.366 [2024-11-27 12:16:10.179261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:20.366 [2024-11-27 12:16:10.179278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.185 ms 00:32:20.366 [2024-11-27 12:16:10.179287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.366 [2024-11-27 12:16:10.179329] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:20.366 [2024-11-27 12:16:10.179346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179706] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.179987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.180000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 
12:16:10.180011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.180024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.180034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.180063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.180073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.180086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.180099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.180118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.180129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.180144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:20.366 [2024-11-27 12:16:10.180155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:32:20.367 [2024-11-27 12:16:10.180349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:20.367 [2024-11-27 12:16:10.180647] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:20.367 [2024-11-27 12:16:10.180661] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9518d22f-ff7b-4dae-936b-af48dab8ee92 00:32:20.367 
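A quick sanity check on the band dump above, as a hypothetical one-liner rather than anything the test itself runs: each band is reported as 0 / 261120 blocks, and the layout dump later in this log implies a 4 KiB FTL block (the 0x5000-block l2p region is listed as 80.00 MiB), so one band covers about 1020 MiB and the 100 bands roughly account for the 102400 MiB data region. The statistics dump that follows also reports WAF as inf, which appears to be total writes divided by user writes with zero user writes:

    # 261120 blocks per band at the assumed 4 KiB block size
    echo "$(( 261120 * 4 / 1024 )) MiB per band"    # -> 1020 MiB; x100 bands ~ 102000 MiB
    # WAF = total writes / user writes; 960 / 0 is printed as "inf" in the dump below
    awk 'BEGIN { t = 960; u = 0; print (u == 0 ? "WAF: inf" : "WAF: " t / u) }'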
[2024-11-27 12:16:10.180673] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:20.367 [2024-11-27 12:16:10.180689] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:20.367 [2024-11-27 12:16:10.180703] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:20.367 [2024-11-27 12:16:10.180715] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:20.367 [2024-11-27 12:16:10.180725] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:20.367 [2024-11-27 12:16:10.180739] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:20.367 [2024-11-27 12:16:10.180749] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:20.367 [2024-11-27 12:16:10.180761] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:20.367 [2024-11-27 12:16:10.180770] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:20.367 [2024-11-27 12:16:10.180783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.367 [2024-11-27 12:16:10.180793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:20.367 [2024-11-27 12:16:10.180806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.459 ms 00:32:20.367 [2024-11-27 12:16:10.180819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.367 [2024-11-27 12:16:10.200776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.367 [2024-11-27 12:16:10.200807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:20.367 [2024-11-27 12:16:10.200823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.935 ms 00:32:20.367 [2024-11-27 12:16:10.200834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.367 [2024-11-27 12:16:10.201440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.367 [2024-11-27 12:16:10.201462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:20.367 [2024-11-27 12:16:10.201481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:32:20.367 [2024-11-27 12:16:10.201491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.367 [2024-11-27 12:16:10.267720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.367 [2024-11-27 12:16:10.267763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:20.367 [2024-11-27 12:16:10.267781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.367 [2024-11-27 12:16:10.267792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.367 [2024-11-27 12:16:10.267858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.367 [2024-11-27 12:16:10.267869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:20.367 [2024-11-27 12:16:10.267886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.367 [2024-11-27 12:16:10.267896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.368 [2024-11-27 12:16:10.267984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.368 [2024-11-27 12:16:10.267997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:20.368 [2024-11-27 12:16:10.268012] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.368 [2024-11-27 12:16:10.268022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.368 [2024-11-27 12:16:10.268051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.368 [2024-11-27 12:16:10.268061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:20.368 [2024-11-27 12:16:10.268075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.368 [2024-11-27 12:16:10.268088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.368 [2024-11-27 12:16:10.392455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.368 [2024-11-27 12:16:10.392507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:20.368 [2024-11-27 12:16:10.392526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.368 [2024-11-27 12:16:10.392537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.628 [2024-11-27 12:16:10.492091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.628 [2024-11-27 12:16:10.492142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:20.628 [2024-11-27 12:16:10.492163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.628 [2024-11-27 12:16:10.492174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.628 [2024-11-27 12:16:10.492308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.628 [2024-11-27 12:16:10.492322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:20.628 [2024-11-27 12:16:10.492335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.628 [2024-11-27 12:16:10.492347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.628 [2024-11-27 12:16:10.492428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.628 [2024-11-27 12:16:10.492441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:20.628 [2024-11-27 12:16:10.492454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.628 [2024-11-27 12:16:10.492465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.628 [2024-11-27 12:16:10.492598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.628 [2024-11-27 12:16:10.492612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:20.628 [2024-11-27 12:16:10.492627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.628 [2024-11-27 12:16:10.492637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.628 [2024-11-27 12:16:10.492692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.628 [2024-11-27 12:16:10.492706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:20.628 [2024-11-27 12:16:10.492720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.628 [2024-11-27 12:16:10.492730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.628 [2024-11-27 12:16:10.492789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.628 [2024-11-27 12:16:10.492801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:32:20.628 [2024-11-27 12:16:10.492814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.628 [2024-11-27 12:16:10.492824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.629 [2024-11-27 12:16:10.492880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:20.629 [2024-11-27 12:16:10.492898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:20.629 [2024-11-27 12:16:10.492912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:20.629 [2024-11-27 12:16:10.492922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.629 [2024-11-27 12:16:10.493083] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 526.192 ms, result 0 00:32:20.629 true 00:32:20.629 12:16:10 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 84528 00:32:20.629 12:16:10 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 84528 ']' 00:32:20.629 12:16:10 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 84528 00:32:20.629 12:16:10 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:32:20.629 12:16:10 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:20.629 12:16:10 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84528 00:32:20.629 12:16:10 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:20.629 12:16:10 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:20.629 killing process with pid 84528 00:32:20.629 12:16:10 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84528' 00:32:20.629 12:16:10 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 84528 00:32:20.629 12:16:10 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 84528 00:32:25.909 12:16:15 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:32:30.105 262144+0 records in 00:32:30.105 262144+0 records out 00:32:30.105 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.96694 s, 271 MB/s 00:32:30.105 12:16:19 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:31.487 12:16:21 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:31.487 [2024-11-27 12:16:21.322368] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
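The xtrace above shows the test's killprocess helper tearing down the FTL app with pid 84528. A minimal sketch of that helper, reconstructed only from the visible trace (the real function lives in SPDK's autotest_common.sh and may differ in detail):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                          # give up if the process is already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid") # here it resolves to reactor_0
        fi
        [ "$process_name" = sudo ] && return 1              # assumed guard: never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                         # returns once the 'FTL shutdown' above finishes
    }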
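The dd invocation above (bs=4K count=256K from /dev/urandom) and the throughput it reports are easy to reproduce by hand; a hypothetical check, not part of the log:

    echo $(( 262144 * 4096 ))    # 262144 records x 4096 bytes = 1073741824 bytes, as reported
    awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 3.96694 / 1e6 }'    # ~271 MB/s, matching dd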
00:32:31.487 [2024-11-27 12:16:21.322501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84784 ] 00:32:31.487 [2024-11-27 12:16:21.507408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.747 [2024-11-27 12:16:21.639062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.007 [2024-11-27 12:16:22.053971] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:32.007 [2024-11-27 12:16:22.054043] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:32.268 [2024-11-27 12:16:22.218405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.268 [2024-11-27 12:16:22.218457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:32.268 [2024-11-27 12:16:22.218474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:32.268 [2024-11-27 12:16:22.218485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.268 [2024-11-27 12:16:22.218534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.268 [2024-11-27 12:16:22.218549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:32.268 [2024-11-27 12:16:22.218561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:32:32.268 [2024-11-27 12:16:22.218572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.268 [2024-11-27 12:16:22.218593] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:32.268 [2024-11-27 12:16:22.219597] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:32.268 [2024-11-27 12:16:22.219625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.268 [2024-11-27 12:16:22.219636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:32.268 [2024-11-27 12:16:22.219646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.038 ms 00:32:32.268 [2024-11-27 12:16:22.219656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.268 [2024-11-27 12:16:22.222103] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:32.268 [2024-11-27 12:16:22.241810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.268 [2024-11-27 12:16:22.241848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:32.268 [2024-11-27 12:16:22.241863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.740 ms 00:32:32.268 [2024-11-27 12:16:22.241873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.268 [2024-11-27 12:16:22.241942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.268 [2024-11-27 12:16:22.241955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:32.268 [2024-11-27 12:16:22.241966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:32:32.268 [2024-11-27 12:16:22.241976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.268 [2024-11-27 12:16:22.254384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
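From here the log interleaves one FTL management step after another, each emitted by trace_step as an Action / name / duration / status quadruple. On a raw console log with one record per line, a throwaway one-liner such as this (hypothetical; ftl.log stands in for the captured output) tabulates the step durations:

    awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); print $1 " ms\t" name }' ftl.log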
00:32:32.268 [2024-11-27 12:16:22.254411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:32.268 [2024-11-27 12:16:22.254422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.357 ms 00:32:32.268 [2024-11-27 12:16:22.254436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.268 [2024-11-27 12:16:22.254520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.268 [2024-11-27 12:16:22.254534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:32.268 [2024-11-27 12:16:22.254545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:32:32.268 [2024-11-27 12:16:22.254555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.268 [2024-11-27 12:16:22.254608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.268 [2024-11-27 12:16:22.254620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:32.268 [2024-11-27 12:16:22.254631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:32.268 [2024-11-27 12:16:22.254641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.268 [2024-11-27 12:16:22.254669] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:32.268 [2024-11-27 12:16:22.260314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.268 [2024-11-27 12:16:22.260345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:32.268 [2024-11-27 12:16:22.260372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.660 ms 00:32:32.268 [2024-11-27 12:16:22.260382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.268 [2024-11-27 12:16:22.260414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.268 [2024-11-27 12:16:22.260425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:32.268 [2024-11-27 12:16:22.260436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:32.268 [2024-11-27 12:16:22.260445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.268 [2024-11-27 12:16:22.260481] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:32.268 [2024-11-27 12:16:22.260507] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:32.268 [2024-11-27 12:16:22.260543] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:32.268 [2024-11-27 12:16:22.260567] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:32.268 [2024-11-27 12:16:22.260657] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:32.268 [2024-11-27 12:16:22.260670] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:32.268 [2024-11-27 12:16:22.260684] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:32.268 [2024-11-27 12:16:22.260697] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:32.268 [2024-11-27 12:16:22.260709] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:32.268 [2024-11-27 12:16:22.260720] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:32.268 [2024-11-27 12:16:22.260730] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:32.268 [2024-11-27 12:16:22.260745] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:32.268 [2024-11-27 12:16:22.260755] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:32.268 [2024-11-27 12:16:22.260765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.268 [2024-11-27 12:16:22.260775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:32.268 [2024-11-27 12:16:22.260785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:32:32.268 [2024-11-27 12:16:22.260795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.268 [2024-11-27 12:16:22.260862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.268 [2024-11-27 12:16:22.260873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:32.268 [2024-11-27 12:16:22.260883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:32:32.268 [2024-11-27 12:16:22.260892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.268 [2024-11-27 12:16:22.260983] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:32.268 [2024-11-27 12:16:22.261005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:32.268 [2024-11-27 12:16:22.261016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:32.268 [2024-11-27 12:16:22.261028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:32.268 [2024-11-27 12:16:22.261038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:32.269 [2024-11-27 12:16:22.261048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:32.269 [2024-11-27 12:16:22.261058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:32.269 [2024-11-27 12:16:22.261067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:32.269 [2024-11-27 12:16:22.261077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:32.269 [2024-11-27 12:16:22.261086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:32.269 [2024-11-27 12:16:22.261097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:32.269 [2024-11-27 12:16:22.261107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:32.269 [2024-11-27 12:16:22.261116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:32.269 [2024-11-27 12:16:22.261136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:32.269 [2024-11-27 12:16:22.261145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:32.269 [2024-11-27 12:16:22.261154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:32.269 [2024-11-27 12:16:22.261163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:32.269 [2024-11-27 12:16:22.261172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:32.269 [2024-11-27 12:16:22.261181] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:32.269 [2024-11-27 12:16:22.261190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:32.269 [2024-11-27 12:16:22.261199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:32.269 [2024-11-27 12:16:22.261208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:32.269 [2024-11-27 12:16:22.261217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:32.269 [2024-11-27 12:16:22.261226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:32.269 [2024-11-27 12:16:22.261235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:32.269 [2024-11-27 12:16:22.261244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:32.269 [2024-11-27 12:16:22.261253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:32.269 [2024-11-27 12:16:22.261261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:32.269 [2024-11-27 12:16:22.261270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:32.269 [2024-11-27 12:16:22.261279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:32.269 [2024-11-27 12:16:22.261287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:32.269 [2024-11-27 12:16:22.261296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:32.269 [2024-11-27 12:16:22.261304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:32.269 [2024-11-27 12:16:22.261313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:32.269 [2024-11-27 12:16:22.261321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:32.269 [2024-11-27 12:16:22.261330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:32.269 [2024-11-27 12:16:22.261339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:32.269 [2024-11-27 12:16:22.261348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:32.269 [2024-11-27 12:16:22.261367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:32.269 [2024-11-27 12:16:22.261377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:32.269 [2024-11-27 12:16:22.261386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:32.269 [2024-11-27 12:16:22.261395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:32.269 [2024-11-27 12:16:22.261405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:32.269 [2024-11-27 12:16:22.261413] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:32.269 [2024-11-27 12:16:22.261423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:32.269 [2024-11-27 12:16:22.261432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:32.269 [2024-11-27 12:16:22.261442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:32.269 [2024-11-27 12:16:22.261452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:32.269 [2024-11-27 12:16:22.261461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:32.269 [2024-11-27 12:16:22.261470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:32.269 
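The layout numbers dumped above are internally consistent, and the superblock dump just below repeats the same sizes in hex. A hypothetical shell check, assuming the 4 KiB FTL block implied by the region sizes:

    echo "$(( 20971520 * 4 / 1024 / 1024 )) MiB"    # L2P: 20971520 entries x 4 bytes = the 80.00 MiB l2p region
    echo "$(( 2048 * 4 / 1024 )) MiB"               # 2048 P2L checkpoint pages x 4 KiB = each 8.00 MiB p2l region
    printf '%d MiB\n' $(( 0x5000 * 4 / 1024 ))      # region type 0x2 (l2p) blk_sz 0x5000 blocks -> 80 MiB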
[2024-11-27 12:16:22.261478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:32.269 [2024-11-27 12:16:22.261486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:32.269 [2024-11-27 12:16:22.261496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:32.269 [2024-11-27 12:16:22.261507] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:32.269 [2024-11-27 12:16:22.261518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:32.269 [2024-11-27 12:16:22.261537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:32.269 [2024-11-27 12:16:22.261547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:32.269 [2024-11-27 12:16:22.261557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:32.269 [2024-11-27 12:16:22.261567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:32.269 [2024-11-27 12:16:22.261577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:32.269 [2024-11-27 12:16:22.261586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:32.269 [2024-11-27 12:16:22.261596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:32.269 [2024-11-27 12:16:22.261606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:32.269 [2024-11-27 12:16:22.261616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:32.269 [2024-11-27 12:16:22.261626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:32.269 [2024-11-27 12:16:22.261635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:32.269 [2024-11-27 12:16:22.261644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:32.269 [2024-11-27 12:16:22.261654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:32.269 [2024-11-27 12:16:22.261664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:32.269 [2024-11-27 12:16:22.261674] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:32.269 [2024-11-27 12:16:22.261694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:32.269 [2024-11-27 12:16:22.261705] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:32.269 [2024-11-27 12:16:22.261716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:32.269 [2024-11-27 12:16:22.261726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:32.269 [2024-11-27 12:16:22.261737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:32.269 [2024-11-27 12:16:22.261747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.269 [2024-11-27 12:16:22.261758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:32.269 [2024-11-27 12:16:22.261767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:32:32.269 [2024-11-27 12:16:22.261776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.269 [2024-11-27 12:16:22.310768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.269 [2024-11-27 12:16:22.310803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:32.269 [2024-11-27 12:16:22.310817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.021 ms 00:32:32.269 [2024-11-27 12:16:22.310833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.269 [2024-11-27 12:16:22.310910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.269 [2024-11-27 12:16:22.310922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:32.269 [2024-11-27 12:16:22.310933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:32:32.269 [2024-11-27 12:16:22.310943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.530 [2024-11-27 12:16:22.387632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.530 [2024-11-27 12:16:22.387669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:32.530 [2024-11-27 12:16:22.387683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.730 ms 00:32:32.530 [2024-11-27 12:16:22.387694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.530 [2024-11-27 12:16:22.387738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.530 [2024-11-27 12:16:22.387754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:32.530 [2024-11-27 12:16:22.387765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:32.530 [2024-11-27 12:16:22.387775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.530 [2024-11-27 12:16:22.388609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.530 [2024-11-27 12:16:22.388631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:32.530 [2024-11-27 12:16:22.388643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:32:32.530 [2024-11-27 12:16:22.388653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.530 [2024-11-27 12:16:22.388785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.530 [2024-11-27 12:16:22.388799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:32.530 [2024-11-27 12:16:22.388817] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:32:32.530 [2024-11-27 12:16:22.388828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.530 [2024-11-27 12:16:22.411659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.530 [2024-11-27 12:16:22.411694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:32.530 [2024-11-27 12:16:22.411708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.846 ms 00:32:32.530 [2024-11-27 12:16:22.411719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.530 [2024-11-27 12:16:22.431061] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:32.530 [2024-11-27 12:16:22.431099] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:32.530 [2024-11-27 12:16:22.431115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.530 [2024-11-27 12:16:22.431126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:32.530 [2024-11-27 12:16:22.431138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.301 ms 00:32:32.530 [2024-11-27 12:16:22.431148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.530 [2024-11-27 12:16:22.459593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.530 [2024-11-27 12:16:22.459637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:32.530 [2024-11-27 12:16:22.459650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.444 ms 00:32:32.530 [2024-11-27 12:16:22.459661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.530 [2024-11-27 12:16:22.476532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.530 [2024-11-27 12:16:22.476567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:32.530 [2024-11-27 12:16:22.476579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.854 ms 00:32:32.530 [2024-11-27 12:16:22.476589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.530 [2024-11-27 12:16:22.493195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.530 [2024-11-27 12:16:22.493228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:32.530 [2024-11-27 12:16:22.493241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.595 ms 00:32:32.530 [2024-11-27 12:16:22.493251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.530 [2024-11-27 12:16:22.493939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.530 [2024-11-27 12:16:22.493966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:32.530 [2024-11-27 12:16:22.493978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:32:32.530 [2024-11-27 12:16:22.493993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.790 [2024-11-27 12:16:22.585961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.790 [2024-11-27 12:16:22.586017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:32.790 [2024-11-27 12:16:22.586034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 92.092 ms 00:32:32.790 [2024-11-27 12:16:22.586053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.790 [2024-11-27 12:16:22.596047] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:32.790 [2024-11-27 12:16:22.599200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.790 [2024-11-27 12:16:22.599232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:32.790 [2024-11-27 12:16:22.599244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.121 ms 00:32:32.790 [2024-11-27 12:16:22.599255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.790 [2024-11-27 12:16:22.599327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.790 [2024-11-27 12:16:22.599341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:32.790 [2024-11-27 12:16:22.599352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:32.790 [2024-11-27 12:16:22.599371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.790 [2024-11-27 12:16:22.599455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.790 [2024-11-27 12:16:22.599468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:32.790 [2024-11-27 12:16:22.599479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:32:32.790 [2024-11-27 12:16:22.599489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.790 [2024-11-27 12:16:22.599516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.790 [2024-11-27 12:16:22.599527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:32.790 [2024-11-27 12:16:22.599537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:32.790 [2024-11-27 12:16:22.599547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.790 [2024-11-27 12:16:22.599586] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:32.790 [2024-11-27 12:16:22.599602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.790 [2024-11-27 12:16:22.599613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:32.790 [2024-11-27 12:16:22.599624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:32:32.790 [2024-11-27 12:16:22.599634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.790 [2024-11-27 12:16:22.635293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.790 [2024-11-27 12:16:22.635330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:32.790 [2024-11-27 12:16:22.635345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.693 ms 00:32:32.790 [2024-11-27 12:16:22.635368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.790 [2024-11-27 12:16:22.635450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.790 [2024-11-27 12:16:22.635463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:32.790 [2024-11-27 12:16:22.635474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:32:32.790 [2024-11-27 12:16:22.635484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:32:32.791 [2024-11-27 12:16:22.636915] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 418.672 ms, result 0 00:32:33.729  [2024-11-27T12:16:24.722Z] Copying: 23/1024 [MB] (23 MBps) [2024-11-27T12:16:25.667Z] Copying: 47/1024 [MB] (24 MBps) [2024-11-27T12:16:27.059Z] Copying: 72/1024 [MB] (24 MBps) [2024-11-27T12:16:27.998Z] Copying: 95/1024 [MB] (23 MBps) [2024-11-27T12:16:28.941Z] Copying: 120/1024 [MB] (24 MBps) [2024-11-27T12:16:29.880Z] Copying: 144/1024 [MB] (24 MBps) [2024-11-27T12:16:30.820Z] Copying: 169/1024 [MB] (24 MBps) [2024-11-27T12:16:31.757Z] Copying: 193/1024 [MB] (24 MBps) [2024-11-27T12:16:32.696Z] Copying: 217/1024 [MB] (24 MBps) [2024-11-27T12:16:33.632Z] Copying: 241/1024 [MB] (24 MBps) [2024-11-27T12:16:35.009Z] Copying: 266/1024 [MB] (24 MBps) [2024-11-27T12:16:35.945Z] Copying: 290/1024 [MB] (24 MBps) [2024-11-27T12:16:36.882Z] Copying: 314/1024 [MB] (24 MBps) [2024-11-27T12:16:37.820Z] Copying: 339/1024 [MB] (24 MBps) [2024-11-27T12:16:38.757Z] Copying: 363/1024 [MB] (24 MBps) [2024-11-27T12:16:39.698Z] Copying: 388/1024 [MB] (24 MBps) [2024-11-27T12:16:40.636Z] Copying: 412/1024 [MB] (24 MBps) [2024-11-27T12:16:42.016Z] Copying: 436/1024 [MB] (23 MBps) [2024-11-27T12:16:42.954Z] Copying: 460/1024 [MB] (24 MBps) [2024-11-27T12:16:43.892Z] Copying: 484/1024 [MB] (24 MBps) [2024-11-27T12:16:44.830Z] Copying: 509/1024 [MB] (24 MBps) [2024-11-27T12:16:45.769Z] Copying: 533/1024 [MB] (24 MBps) [2024-11-27T12:16:46.708Z] Copying: 558/1024 [MB] (25 MBps) [2024-11-27T12:16:47.645Z] Copying: 583/1024 [MB] (25 MBps) [2024-11-27T12:16:49.023Z] Copying: 608/1024 [MB] (24 MBps) [2024-11-27T12:16:49.961Z] Copying: 633/1024 [MB] (24 MBps) [2024-11-27T12:16:50.898Z] Copying: 658/1024 [MB] (24 MBps) [2024-11-27T12:16:51.837Z] Copying: 682/1024 [MB] (24 MBps) [2024-11-27T12:16:52.775Z] Copying: 706/1024 [MB] (23 MBps) [2024-11-27T12:16:53.711Z] Copying: 730/1024 [MB] (23 MBps) [2024-11-27T12:16:54.672Z] Copying: 754/1024 [MB] (23 MBps) [2024-11-27T12:16:55.611Z] Copying: 777/1024 [MB] (23 MBps) [2024-11-27T12:16:56.992Z] Copying: 801/1024 [MB] (23 MBps) [2024-11-27T12:16:57.975Z] Copying: 825/1024 [MB] (24 MBps) [2024-11-27T12:16:58.607Z] Copying: 849/1024 [MB] (24 MBps) [2024-11-27T12:16:59.995Z] Copying: 874/1024 [MB] (24 MBps) [2024-11-27T12:17:00.934Z] Copying: 898/1024 [MB] (24 MBps) [2024-11-27T12:17:01.872Z] Copying: 923/1024 [MB] (24 MBps) [2024-11-27T12:17:02.811Z] Copying: 947/1024 [MB] (24 MBps) [2024-11-27T12:17:03.749Z] Copying: 971/1024 [MB] (23 MBps) [2024-11-27T12:17:04.687Z] Copying: 995/1024 [MB] (24 MBps) [2024-11-27T12:17:04.949Z] Copying: 1020/1024 [MB] (24 MBps) [2024-11-27T12:17:04.949Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-27 12:17:04.742390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.896 [2024-11-27 12:17:04.742470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:14.896 [2024-11-27 12:17:04.742489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:14.896 [2024-11-27 12:17:04.742500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.896 [2024-11-27 12:17:04.742522] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:14.896 [2024-11-27 12:17:04.746872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.896 [2024-11-27 12:17:04.746903] mngt/ftl_mngt.c: 428:trace_step: 
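The copy progress above reports an average of 24 MBps over 1024 MiB, roughly in line with the ~41 s spanned by the progress timestamps (12:16:24 through 12:17:04); a hypothetical check:

    awk 'BEGIN { printf "%.1f s\n", 1024 / 24 }'    # ~42.7 s at the reported average rate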
*NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:14.896 [2024-11-27 12:17:04.746923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.340 ms 00:33:14.896 [2024-11-27 12:17:04.746933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.896 [2024-11-27 12:17:04.748846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.896 [2024-11-27 12:17:04.748881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:14.896 [2024-11-27 12:17:04.748893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.889 ms 00:33:14.896 [2024-11-27 12:17:04.748903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.896 [2024-11-27 12:17:04.748932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.896 [2024-11-27 12:17:04.748943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:14.896 [2024-11-27 12:17:04.748954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:14.896 [2024-11-27 12:17:04.748963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.896 [2024-11-27 12:17:04.749023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.896 [2024-11-27 12:17:04.749035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:14.896 [2024-11-27 12:17:04.749044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:33:14.896 [2024-11-27 12:17:04.749054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.896 [2024-11-27 12:17:04.749067] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:14.896 [2024-11-27 12:17:04.749081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749482] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:14.896 [2024-11-27 12:17:04.749619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 
12:17:04.749736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:14.897 [2024-11-27 12:17:04.749969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:33:14.897 [2024-11-27 12:17:04.749978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:33:14.897 [2024-11-27 12:17:04.749988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:33:14.897 [2024-11-27 12:17:04.749997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:33:14.897 [2024-11-27 12:17:04.750007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:33:14.897 [2024-11-27 12:17:04.750016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:33:14.897 [2024-11-27 12:17:04.750025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:33:14.897 [2024-11-27 12:17:04.750035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:33:14.897 [2024-11-27 12:17:04.750044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:33:14.897 [2024-11-27 12:17:04.750054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:33:14.897 [2024-11-27 12:17:04.750064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:33:14.897 [2024-11-27 12:17:04.750073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:33:14.897 [2024-11-27 12:17:04.750083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:33:14.897 [2024-11-27 12:17:04.750092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:33:14.897 [2024-11-27 12:17:04.750108] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:33:14.897 [2024-11-27 12:17:04.750118] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9518d22f-ff7b-4dae-936b-af48dab8ee92
00:33:14.897 [2024-11-27 12:17:04.750128] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:33:14.897 [2024-11-27 12:17:04.750137] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32
00:33:14.897 [2024-11-27 12:17:04.750145] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:33:14.897 [2024-11-27 12:17:04.750159] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:33:14.897 [2024-11-27 12:17:04.750168] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:33:14.897 [2024-11-27 12:17:04.750178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:33:14.897 [2024-11-27 12:17:04.750187] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:33:14.897 [2024-11-27 12:17:04.750196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:33:14.897 [2024-11-27 12:17:04.750204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:33:14.897 [2024-11-27 12:17:04.750213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:14.897 [2024-11-27 12:17:04.750223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:33:14.897 [2024-11-27 12:17:04.750233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.148 ms
00:33:14.897 [2024-11-27 12:17:04.750242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*:
[FTL][ftl0] status: 0 00:33:14.897 [2024-11-27 12:17:04.769423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.897 [2024-11-27 12:17:04.769462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:14.897 [2024-11-27 12:17:04.769474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.195 ms 00:33:14.897 [2024-11-27 12:17:04.769484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.897 [2024-11-27 12:17:04.770062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.897 [2024-11-27 12:17:04.770073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:14.897 [2024-11-27 12:17:04.770083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:33:14.897 [2024-11-27 12:17:04.770092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.897 [2024-11-27 12:17:04.821819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:14.897 [2024-11-27 12:17:04.821852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:14.897 [2024-11-27 12:17:04.821865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:14.897 [2024-11-27 12:17:04.821875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.897 [2024-11-27 12:17:04.821935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:14.897 [2024-11-27 12:17:04.821945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:14.897 [2024-11-27 12:17:04.821955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:14.897 [2024-11-27 12:17:04.821965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.897 [2024-11-27 12:17:04.822020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:14.897 [2024-11-27 12:17:04.822038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:14.897 [2024-11-27 12:17:04.822048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:14.897 [2024-11-27 12:17:04.822057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.897 [2024-11-27 12:17:04.822073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:14.897 [2024-11-27 12:17:04.822083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:14.897 [2024-11-27 12:17:04.822097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:14.897 [2024-11-27 12:17:04.822107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.158 [2024-11-27 12:17:04.946870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.158 [2024-11-27 12:17:04.946925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:15.158 [2024-11-27 12:17:04.946940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.158 [2024-11-27 12:17:04.946951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.158 [2024-11-27 12:17:05.047065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.158 [2024-11-27 12:17:05.047110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:15.158 [2024-11-27 12:17:05.047125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.158 
[2024-11-27 12:17:05.047136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.158 [2024-11-27 12:17:05.047254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.158 [2024-11-27 12:17:05.047266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:15.158 [2024-11-27 12:17:05.047283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.158 [2024-11-27 12:17:05.047294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.158 [2024-11-27 12:17:05.047349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.158 [2024-11-27 12:17:05.047380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:15.158 [2024-11-27 12:17:05.047393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.158 [2024-11-27 12:17:05.047403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.158 [2024-11-27 12:17:05.047504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.158 [2024-11-27 12:17:05.047517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:15.158 [2024-11-27 12:17:05.047539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.158 [2024-11-27 12:17:05.047553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.158 [2024-11-27 12:17:05.047586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.158 [2024-11-27 12:17:05.047598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:15.158 [2024-11-27 12:17:05.047608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.158 [2024-11-27 12:17:05.047618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.158 [2024-11-27 12:17:05.047664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.158 [2024-11-27 12:17:05.047674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:15.158 [2024-11-27 12:17:05.047684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.158 [2024-11-27 12:17:05.047698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.158 [2024-11-27 12:17:05.047747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.158 [2024-11-27 12:17:05.047759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:15.158 [2024-11-27 12:17:05.047771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.158 [2024-11-27 12:17:05.047780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.158 [2024-11-27 12:17:05.047926] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 305.983 ms, result 0 00:33:16.538 00:33:16.538 00:33:16.538 12:17:06 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:33:16.538 [2024-11-27 12:17:06.389913] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
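The spdk_dd step above reads the data back out of the ftl0 bdev described by ftl.json, copying 262144 blocks into the test file. As a quick cross-check against the progress meter further down, a minimal sketch in Python; the 4 KiB logical block size is an assumption here, since the block size itself is never printed in this log:

    blocks = 262144                      # from: spdk_dd --count=262144
    block_size = 4096                    # bytes, assumed FTL logical block size
    print(blocks * block_size // 2**20)  # -> 1024, matching "Copying: 1024/1024 [MB]"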
00:33:16.538 [2024-11-27 12:17:06.390044] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85231 ] 00:33:16.538 [2024-11-27 12:17:06.569202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.798 [2024-11-27 12:17:06.699793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.367 [2024-11-27 12:17:07.121669] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:17.367 [2024-11-27 12:17:07.121750] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:17.367 [2024-11-27 12:17:07.286015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.367 [2024-11-27 12:17:07.286070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:17.367 [2024-11-27 12:17:07.286087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:17.367 [2024-11-27 12:17:07.286097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.367 [2024-11-27 12:17:07.286148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.367 [2024-11-27 12:17:07.286164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:17.367 [2024-11-27 12:17:07.286174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:33:17.367 [2024-11-27 12:17:07.286184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.367 [2024-11-27 12:17:07.286205] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:17.367 [2024-11-27 12:17:07.287171] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:17.367 [2024-11-27 12:17:07.287198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.367 [2024-11-27 12:17:07.287210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:17.367 [2024-11-27 12:17:07.287220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 00:33:17.367 [2024-11-27 12:17:07.287229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.367 [2024-11-27 12:17:07.287637] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:17.367 [2024-11-27 12:17:07.287666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.367 [2024-11-27 12:17:07.287682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:17.367 [2024-11-27 12:17:07.287693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:33:17.367 [2024-11-27 12:17:07.287703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.367 [2024-11-27 12:17:07.287767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.367 [2024-11-27 12:17:07.287780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:17.367 [2024-11-27 12:17:07.287790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:33:17.367 [2024-11-27 12:17:07.287801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.367 [2024-11-27 12:17:07.288210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:17.367 [2024-11-27 12:17:07.288230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:17.367 [2024-11-27 12:17:07.288241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:33:17.367 [2024-11-27 12:17:07.288251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.367 [2024-11-27 12:17:07.288326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.368 [2024-11-27 12:17:07.288339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:17.368 [2024-11-27 12:17:07.288349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:33:17.368 [2024-11-27 12:17:07.288374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.368 [2024-11-27 12:17:07.288397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.368 [2024-11-27 12:17:07.288408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:17.368 [2024-11-27 12:17:07.288422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:17.368 [2024-11-27 12:17:07.288432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.368 [2024-11-27 12:17:07.288454] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:17.368 [2024-11-27 12:17:07.294803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.368 [2024-11-27 12:17:07.294833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:17.368 [2024-11-27 12:17:07.294845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.364 ms 00:33:17.368 [2024-11-27 12:17:07.294855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.368 [2024-11-27 12:17:07.294884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.368 [2024-11-27 12:17:07.294895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:17.368 [2024-11-27 12:17:07.294905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:17.368 [2024-11-27 12:17:07.294914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.368 [2024-11-27 12:17:07.294965] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:17.368 [2024-11-27 12:17:07.294992] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:17.368 [2024-11-27 12:17:07.295030] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:17.368 [2024-11-27 12:17:07.295047] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:17.368 [2024-11-27 12:17:07.295135] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:17.368 [2024-11-27 12:17:07.295148] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:17.368 [2024-11-27 12:17:07.295161] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:17.368 [2024-11-27 12:17:07.295174] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:17.368 [2024-11-27 12:17:07.295185] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:17.368 [2024-11-27 12:17:07.295200] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:17.368 [2024-11-27 12:17:07.295211] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:17.368 [2024-11-27 12:17:07.295220] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:17.368 [2024-11-27 12:17:07.295230] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:17.368 [2024-11-27 12:17:07.295240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.368 [2024-11-27 12:17:07.295249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:17.368 [2024-11-27 12:17:07.295260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:33:17.368 [2024-11-27 12:17:07.295270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.368 [2024-11-27 12:17:07.295336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.368 [2024-11-27 12:17:07.295346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:17.368 [2024-11-27 12:17:07.295368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:33:17.368 [2024-11-27 12:17:07.295382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.368 [2024-11-27 12:17:07.295471] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:17.368 [2024-11-27 12:17:07.295485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:17.368 [2024-11-27 12:17:07.295496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:17.368 [2024-11-27 12:17:07.295505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:17.368 [2024-11-27 12:17:07.295516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:17.368 [2024-11-27 12:17:07.295524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:17.368 [2024-11-27 12:17:07.295534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:17.368 [2024-11-27 12:17:07.295543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:17.368 [2024-11-27 12:17:07.295552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:17.368 [2024-11-27 12:17:07.295562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:17.368 [2024-11-27 12:17:07.295572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:17.368 [2024-11-27 12:17:07.295581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:17.368 [2024-11-27 12:17:07.295590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:17.368 [2024-11-27 12:17:07.295599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:17.368 [2024-11-27 12:17:07.295609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:17.368 [2024-11-27 12:17:07.295627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:17.368 [2024-11-27 12:17:07.295636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:17.368 [2024-11-27 12:17:07.295645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:17.368 [2024-11-27 12:17:07.295654] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:17.368 [2024-11-27 12:17:07.295663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:17.368 [2024-11-27 12:17:07.295672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:17.368 [2024-11-27 12:17:07.295680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:17.368 [2024-11-27 12:17:07.295689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:17.368 [2024-11-27 12:17:07.295698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:17.368 [2024-11-27 12:17:07.295706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:17.368 [2024-11-27 12:17:07.295716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:17.368 [2024-11-27 12:17:07.295724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:17.368 [2024-11-27 12:17:07.295733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:17.368 [2024-11-27 12:17:07.295741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:17.368 [2024-11-27 12:17:07.295749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:17.368 [2024-11-27 12:17:07.295757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:17.368 [2024-11-27 12:17:07.295765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:17.368 [2024-11-27 12:17:07.295774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:17.368 [2024-11-27 12:17:07.295782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:17.368 [2024-11-27 12:17:07.295790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:17.368 [2024-11-27 12:17:07.295798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:17.368 [2024-11-27 12:17:07.295807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:17.368 [2024-11-27 12:17:07.295816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:17.368 [2024-11-27 12:17:07.295824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:17.368 [2024-11-27 12:17:07.295832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:17.368 [2024-11-27 12:17:07.295840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:17.368 [2024-11-27 12:17:07.295848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:17.368 [2024-11-27 12:17:07.295858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:17.368 [2024-11-27 12:17:07.295867] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:17.368 [2024-11-27 12:17:07.295877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:17.368 [2024-11-27 12:17:07.295886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:17.368 [2024-11-27 12:17:07.295895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:17.368 [2024-11-27 12:17:07.295908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:17.368 [2024-11-27 12:17:07.295916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:17.368 [2024-11-27 12:17:07.295925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:17.368 
[2024-11-27 12:17:07.295933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:17.368 [2024-11-27 12:17:07.295942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:17.368 [2024-11-27 12:17:07.295951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:17.368 [2024-11-27 12:17:07.295962] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:17.368 [2024-11-27 12:17:07.295973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:17.368 [2024-11-27 12:17:07.295984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:17.368 [2024-11-27 12:17:07.295993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:17.368 [2024-11-27 12:17:07.296002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:17.368 [2024-11-27 12:17:07.296011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:17.368 [2024-11-27 12:17:07.296021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:17.368 [2024-11-27 12:17:07.296030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:17.368 [2024-11-27 12:17:07.296040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:17.368 [2024-11-27 12:17:07.296050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:17.368 [2024-11-27 12:17:07.296060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:17.369 [2024-11-27 12:17:07.296069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:17.369 [2024-11-27 12:17:07.296078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:17.369 [2024-11-27 12:17:07.296087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:17.369 [2024-11-27 12:17:07.296097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:17.369 [2024-11-27 12:17:07.296107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:17.369 [2024-11-27 12:17:07.296117] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:17.369 [2024-11-27 12:17:07.296128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:17.369 [2024-11-27 12:17:07.296139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:17.369 [2024-11-27 12:17:07.296148] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:17.369 [2024-11-27 12:17:07.296158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:17.369 [2024-11-27 12:17:07.296169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:17.369 [2024-11-27 12:17:07.296179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.369 [2024-11-27 12:17:07.296189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:17.369 [2024-11-27 12:17:07.296199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.761 ms 00:33:17.369 [2024-11-27 12:17:07.296208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.369 [2024-11-27 12:17:07.337828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.369 [2024-11-27 12:17:07.337866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:17.369 [2024-11-27 12:17:07.337880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.647 ms 00:33:17.369 [2024-11-27 12:17:07.337892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.369 [2024-11-27 12:17:07.337972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.369 [2024-11-27 12:17:07.337984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:17.369 [2024-11-27 12:17:07.338000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:33:17.369 [2024-11-27 12:17:07.338011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.369 [2024-11-27 12:17:07.402529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.369 [2024-11-27 12:17:07.402565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:17.369 [2024-11-27 12:17:07.402579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.566 ms 00:33:17.369 [2024-11-27 12:17:07.402590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.369 [2024-11-27 12:17:07.402631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.369 [2024-11-27 12:17:07.402643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:17.369 [2024-11-27 12:17:07.402654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:17.369 [2024-11-27 12:17:07.402665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.369 [2024-11-27 12:17:07.402792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.369 [2024-11-27 12:17:07.402806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:17.369 [2024-11-27 12:17:07.402818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:33:17.369 [2024-11-27 12:17:07.402829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.369 [2024-11-27 12:17:07.402958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.369 [2024-11-27 12:17:07.402972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:17.369 [2024-11-27 12:17:07.402985] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:33:17.369 [2024-11-27 12:17:07.402996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.629 [2024-11-27 12:17:07.425392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.629 [2024-11-27 12:17:07.425426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:17.629 [2024-11-27 12:17:07.425440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.410 ms 00:33:17.629 [2024-11-27 12:17:07.425450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.629 [2024-11-27 12:17:07.425578] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:17.629 [2024-11-27 12:17:07.425594] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:17.629 [2024-11-27 12:17:07.425611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.629 [2024-11-27 12:17:07.425621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:17.629 [2024-11-27 12:17:07.425632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:33:17.629 [2024-11-27 12:17:07.425642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.629 [2024-11-27 12:17:07.436032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.629 [2024-11-27 12:17:07.436064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:17.629 [2024-11-27 12:17:07.436076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.389 ms 00:33:17.629 [2024-11-27 12:17:07.436086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.629 [2024-11-27 12:17:07.436202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.629 [2024-11-27 12:17:07.436214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:17.629 [2024-11-27 12:17:07.436225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:33:17.629 [2024-11-27 12:17:07.436241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.629 [2024-11-27 12:17:07.436289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.629 [2024-11-27 12:17:07.436302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:17.629 [2024-11-27 12:17:07.436324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:17.629 [2024-11-27 12:17:07.436334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.629 [2024-11-27 12:17:07.437012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.629 [2024-11-27 12:17:07.437037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:17.629 [2024-11-27 12:17:07.437049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.617 ms 00:33:17.629 [2024-11-27 12:17:07.437061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.629 [2024-11-27 12:17:07.437091] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:17.629 [2024-11-27 12:17:07.437105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.629 [2024-11-27 12:17:07.437116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:33:17.629 [2024-11-27 12:17:07.437127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:33:17.629 [2024-11-27 12:17:07.437138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.629 [2024-11-27 12:17:07.450161] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:17.629 [2024-11-27 12:17:07.450343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.629 [2024-11-27 12:17:07.450373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:17.629 [2024-11-27 12:17:07.450386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.206 ms 00:33:17.629 [2024-11-27 12:17:07.450397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.629 [2024-11-27 12:17:07.452263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.629 [2024-11-27 12:17:07.452290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:17.629 [2024-11-27 12:17:07.452301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.846 ms 00:33:17.629 [2024-11-27 12:17:07.452311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.629 [2024-11-27 12:17:07.452419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.629 [2024-11-27 12:17:07.452432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:17.629 [2024-11-27 12:17:07.452444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:33:17.629 [2024-11-27 12:17:07.452453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.629 [2024-11-27 12:17:07.452481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.629 [2024-11-27 12:17:07.452499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:17.629 [2024-11-27 12:17:07.452509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:17.629 [2024-11-27 12:17:07.452519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.629 [2024-11-27 12:17:07.452558] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:17.629 [2024-11-27 12:17:07.452570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.629 [2024-11-27 12:17:07.452580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:17.629 [2024-11-27 12:17:07.452591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:17.629 [2024-11-27 12:17:07.452601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.629 [2024-11-27 12:17:07.488448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.629 [2024-11-27 12:17:07.488485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:17.629 [2024-11-27 12:17:07.488499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.885 ms 00:33:17.629 [2024-11-27 12:17:07.488510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.629 [2024-11-27 12:17:07.488588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.629 [2024-11-27 12:17:07.488600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:17.629 [2024-11-27 12:17:07.488612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.037 ms 00:33:17.629 [2024-11-27 12:17:07.488622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.630 [2024-11-27 12:17:07.489958] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 203.752 ms, result 0 00:33:19.008  [2024-11-27T12:17:09.997Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-27T12:17:10.936Z] Copying: 52/1024 [MB] (25 MBps) [2024-11-27T12:17:11.874Z] Copying: 77/1024 [MB] (25 MBps) [2024-11-27T12:17:12.812Z] Copying: 103/1024 [MB] (25 MBps) [2024-11-27T12:17:13.750Z] Copying: 129/1024 [MB] (26 MBps) [2024-11-27T12:17:15.125Z] Copying: 155/1024 [MB] (26 MBps) [2024-11-27T12:17:15.693Z] Copying: 181/1024 [MB] (26 MBps) [2024-11-27T12:17:17.074Z] Copying: 207/1024 [MB] (26 MBps) [2024-11-27T12:17:18.012Z] Copying: 233/1024 [MB] (26 MBps) [2024-11-27T12:17:18.950Z] Copying: 260/1024 [MB] (26 MBps) [2024-11-27T12:17:19.888Z] Copying: 286/1024 [MB] (26 MBps) [2024-11-27T12:17:20.827Z] Copying: 312/1024 [MB] (26 MBps) [2024-11-27T12:17:21.820Z] Copying: 338/1024 [MB] (25 MBps) [2024-11-27T12:17:22.762Z] Copying: 364/1024 [MB] (25 MBps) [2024-11-27T12:17:23.702Z] Copying: 390/1024 [MB] (25 MBps) [2024-11-27T12:17:25.082Z] Copying: 416/1024 [MB] (25 MBps) [2024-11-27T12:17:26.020Z] Copying: 442/1024 [MB] (25 MBps) [2024-11-27T12:17:26.959Z] Copying: 467/1024 [MB] (25 MBps) [2024-11-27T12:17:27.897Z] Copying: 493/1024 [MB] (25 MBps) [2024-11-27T12:17:28.835Z] Copying: 519/1024 [MB] (25 MBps) [2024-11-27T12:17:29.772Z] Copying: 545/1024 [MB] (25 MBps) [2024-11-27T12:17:30.711Z] Copying: 571/1024 [MB] (25 MBps) [2024-11-27T12:17:32.091Z] Copying: 597/1024 [MB] (26 MBps) [2024-11-27T12:17:33.030Z] Copying: 623/1024 [MB] (25 MBps) [2024-11-27T12:17:33.967Z] Copying: 649/1024 [MB] (26 MBps) [2024-11-27T12:17:34.906Z] Copying: 675/1024 [MB] (26 MBps) [2024-11-27T12:17:35.843Z] Copying: 702/1024 [MB] (26 MBps) [2024-11-27T12:17:36.782Z] Copying: 728/1024 [MB] (26 MBps) [2024-11-27T12:17:37.721Z] Copying: 754/1024 [MB] (25 MBps) [2024-11-27T12:17:38.660Z] Copying: 780/1024 [MB] (25 MBps) [2024-11-27T12:17:40.043Z] Copying: 805/1024 [MB] (25 MBps) [2024-11-27T12:17:40.979Z] Copying: 831/1024 [MB] (25 MBps) [2024-11-27T12:17:41.912Z] Copying: 857/1024 [MB] (25 MBps) [2024-11-27T12:17:42.845Z] Copying: 882/1024 [MB] (25 MBps) [2024-11-27T12:17:43.781Z] Copying: 909/1024 [MB] (26 MBps) [2024-11-27T12:17:44.720Z] Copying: 935/1024 [MB] (26 MBps) [2024-11-27T12:17:45.659Z] Copying: 962/1024 [MB] (26 MBps) [2024-11-27T12:17:47.040Z] Copying: 988/1024 [MB] (26 MBps) [2024-11-27T12:17:47.040Z] Copying: 1014/1024 [MB] (26 MBps) [2024-11-27T12:17:47.301Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-27 12:17:47.217854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:57.248 [2024-11-27 12:17:47.217930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:57.248 [2024-11-27 12:17:47.217956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:57.248 [2024-11-27 12:17:47.217966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:57.248 [2024-11-27 12:17:47.217997] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:57.248 [2024-11-27 12:17:47.222798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:57.248 [2024-11-27 12:17:47.222829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 
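The dd-style progress lines above give one throughput sample per update plus a closing "(average 26 MBps)" summary. When triaging nightly FTL runs it can be handy to pull those samples, and the per-step durations printed by mngt/ftl_mngt.c:trace_step, straight out of the console text. A minimal sketch, assuming the log has been saved to a file named console.log and that every "name:" entry is followed by its own "duration:" entry, as in the excerpts here:

    import re
    import statistics

    with open("console.log", encoding="utf-8", errors="replace") as f:
        log = f.read()

    # Throughput samples such as "(25 MBps)"; the "(average 26 MBps)"
    # summary line deliberately does not match this pattern.
    rates = [int(r) for r in re.findall(r"\((\d+) MBps\)", log)]
    print(f"{len(rates)} samples, mean {statistics.mean(rates):.1f} MBps")

    # Step timings from the trace_step output (Action / name / duration / status).
    names = re.findall(r"name: (.*?)\s+\d{2}:\d{2}:\d{2}\.\d{3}", log)
    durations = [float(d) for d in re.findall(r"duration: ([\d.]+) ms", log)]
    for name, ms in sorted(zip(names, durations), key=lambda p: -p[1])[:5]:
        print(f"{ms:10.3f} ms  {name}")

The slowest steps surface first, which in this run would point at entries like "Initialize NV cache" (64.566 ms) and "Set FTL dirty state" (35.885 ms) from the startup sequence above.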
00:33:57.248 [2024-11-27 12:17:47.222842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.790 ms 00:33:57.248 [2024-11-27 12:17:47.222852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:57.248 [2024-11-27 12:17:47.223059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:57.248 [2024-11-27 12:17:47.223074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:57.248 [2024-11-27 12:17:47.223085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:33:57.248 [2024-11-27 12:17:47.223095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:57.248 [2024-11-27 12:17:47.223129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:57.248 [2024-11-27 12:17:47.223140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:57.248 [2024-11-27 12:17:47.223150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:57.248 [2024-11-27 12:17:47.223160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:57.248 [2024-11-27 12:17:47.223225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:57.248 [2024-11-27 12:17:47.223236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:57.248 [2024-11-27 12:17:47.223246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:33:57.248 [2024-11-27 12:17:47.223255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:57.248 [2024-11-27 12:17:47.223270] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:57.248 [2024-11-27 12:17:47.223284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 
00:33:57.248 [2024-11-27 12:17:47.223433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:57.248 [2024-11-27 12:17:47.223684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 
wr_cnt: 0 state: free
00:33:57.248 [2024-11-27 12:17:47.223919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:33:57.248 [2024-11-27 12:17:47.223929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:33:57.248 [2024-11-27 12:17:47.223940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:33:57.248 [2024-11-27 12:17:47.223950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:33:57.248 [2024-11-27 12:17:47.223960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:33:57.248 [2024-11-27 12:17:47.223971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:33:57.248 [2024-11-27 12:17:47.223982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:33:57.249 [2024-11-27 12:17:47.224577] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:33:57.249 [2024-11-27 12:17:47.224587] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9518d22f-ff7b-4dae-936b-af48dab8ee92
00:33:57.249 [2024-11-27 12:17:47.224598] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:33:57.249 [2024-11-27 12:17:47.224608] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32
00:33:57.249 [2024-11-27 12:17:47.224617] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:33:57.249 [2024-11-27 12:17:47.224633] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:33:57.249 [2024-11-27 12:17:47.224643] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:33:57.249 [2024-11-27 12:17:47.224653] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:33:57.249 [2024-11-27 12:17:47.224662] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:33:57.249 [2024-11-27 12:17:47.224670] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:33:57.249 [2024-11-27 12:17:47.224678] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:33:57.249 [2024-11-27 12:17:47.224687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:57.249 [2024-11-27 12:17:47.224696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:33:57.249 [2024-11-27 12:17:47.224706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.420 ms
00:33:57.249 [2024-11-27 12:17:47.224719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.249 [2024-11-27 12:17:47.245399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:57.249 [2024-11-27 12:17:47.245432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:33:57.249 [2024-11-27 12:17:47.245445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.696 ms
00:33:57.249 [2024-11-27 12:17:47.245454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.249 [2024-11-27 12:17:47.246024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:57.249 [2024-11-27 12:17:47.246043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:33:57.249 [2024-11-27 12:17:47.246058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms
00:33:57.249 [2024-11-27 12:17:47.246067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.249 [2024-11-27 12:17:47.296763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.249 [2024-11-27 12:17:47.296794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:33:57.249 [2024-11-27 12:17:47.296807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.249 [2024-11-27 12:17:47.296817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.249 [2024-11-27 12:17:47.296876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.249 [2024-11-27 12:17:47.296887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:33:57.249 [2024-11-27 12:17:47.296904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.249 [2024-11-27 12:17:47.296914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.249 [2024-11-27 12:17:47.296971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.249 [2024-11-27 12:17:47.296984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:33:57.249 [2024-11-27 12:17:47.296993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.249 [2024-11-27 12:17:47.297003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.249 [2024-11-27 12:17:47.297020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.249 [2024-11-27 12:17:47.297031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:33:57.249 [2024-11-27 12:17:47.297051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.249 [2024-11-27 12:17:47.297065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.510 [2024-11-27 12:17:47.422299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.510 [2024-11-27 12:17:47.422350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:33:57.510 [2024-11-27 12:17:47.422373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.510 [2024-11-27 12:17:47.422383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.510 [2024-11-27 12:17:47.521475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.510 [2024-11-27 12:17:47.521523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:33:57.510 [2024-11-27 12:17:47.521538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.510 [2024-11-27 12:17:47.521556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.510 [2024-11-27 12:17:47.521666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.510 [2024-11-27 12:17:47.521679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:33:57.510 [2024-11-27 12:17:47.521690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.510 [2024-11-27 12:17:47.521709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.510 [2024-11-27 12:17:47.521754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.510 [2024-11-27 12:17:47.521766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:33:57.510 [2024-11-27 12:17:47.521775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.510 [2024-11-27 12:17:47.521785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.510 [2024-11-27 12:17:47.521882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.510 [2024-11-27 12:17:47.521895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:33:57.510 [2024-11-27 12:17:47.521906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.510 [2024-11-27 12:17:47.521916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.510 [2024-11-27 12:17:47.521947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.510 [2024-11-27 12:17:47.521960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:33:57.510 [2024-11-27 12:17:47.521971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.510 [2024-11-27 12:17:47.521980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.510 [2024-11-27 12:17:47.522031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.510 [2024-11-27 12:17:47.522042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:33:57.510 [2024-11-27 12:17:47.522052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.510 [2024-11-27 12:17:47.522061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.510 [2024-11-27 12:17:47.522111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.510 [2024-11-27 12:17:47.522123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:33:57.510 [2024-11-27 12:17:47.522140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.510 [2024-11-27 12:17:47.522151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.510 [2024-11-27 12:17:47.522350] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 304.923 ms, result 0
00:33:58.890
00:33:58.890
00:33:58.890 12:17:48 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:34:00.268 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:34:00.268 12:17:50 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
00:34:00.527 [2024-11-27 12:17:50.346088] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:34:00.527 [2024-11-27 12:17:50.346211] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85664 ]
00:34:00.527 [2024-11-27 12:17:50.519589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:00.787 [2024-11-27 12:17:50.648622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:01.047 [2024-11-27 12:17:51.064157] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:34:01.047 [2024-11-27 12:17:51.064225] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:34:01.309 [2024-11-27 12:17:51.228538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.309 [2024-11-27 12:17:51.228592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:34:01.309 [2024-11-27 12:17:51.228608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:34:01.309 [2024-11-27 12:17:51.228619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.309 [2024-11-27 12:17:51.228667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.309 [2024-11-27 12:17:51.228682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:34:01.309 [2024-11-27 12:17:51.228692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms
00:34:01.309 [2024-11-27 12:17:51.228702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.309 [2024-11-27 12:17:51.228723] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:34:01.309 [2024-11-27 12:17:51.229606] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:34:01.309 [2024-11-27 12:17:51.229635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.309 [2024-11-27 12:17:51.229646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:34:01.309 [2024-11-27 12:17:51.229657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.918 ms
00:34:01.309 [2024-11-27 12:17:51.229667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.309 [2024-11-27 12:17:51.230007] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1
00:34:01.309 [2024-11-27 12:17:51.230036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.309 [2024-11-27 12:17:51.230052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:34:01.309 [2024-11-27 12:17:51.230063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms
00:34:01.309 [2024-11-27 12:17:51.230073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.309 [2024-11-27 12:17:51.230129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.309 [2024-11-27 12:17:51.230141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:34:01.309 [2024-11-27 12:17:51.230151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:34:01.309 [2024-11-27 12:17:51.230160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.309 [2024-11-27 12:17:51.230578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.309 [2024-11-27 12:17:51.230600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:34:01.309 [2024-11-27 12:17:51.230611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms
00:34:01.309 [2024-11-27 12:17:51.230621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.309 [2024-11-27 12:17:51.230706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.309 [2024-11-27 12:17:51.230726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:34:01.309 [2024-11-27 12:17:51.230736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms
00:34:01.309 [2024-11-27 12:17:51.230746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.309 [2024-11-27 12:17:51.230771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.309 [2024-11-27 12:17:51.230782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:34:01.309 [2024-11-27 12:17:51.230796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:34:01.309 [2024-11-27 12:17:51.230806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.309 [2024-11-27 12:17:51.230827] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:34:01.309 [2024-11-27 12:17:51.236928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.309 [2024-11-27 12:17:51.236958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:34:01.309 [2024-11-27 12:17:51.236970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.115 ms
00:34:01.309 [2024-11-27 12:17:51.236980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.309 [2024-11-27 12:17:51.237009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.309 [2024-11-27 12:17:51.237019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:34:01.309 [2024-11-27 12:17:51.237030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:34:01.309 [2024-11-27 12:17:51.237039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.309 [2024-11-27 12:17:51.237090] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:34:01.309 [2024-11-27 12:17:51.237117] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:34:01.309 [2024-11-27 12:17:51.237156] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:34:01.309 [2024-11-27 12:17:51.237174] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:34:01.309 [2024-11-27 12:17:51.237258] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:34:01.309 [2024-11-27 12:17:51.237271] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:34:01.309 [2024-11-27 12:17:51.237283] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:34:01.309 [2024-11-27 12:17:51.237296] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:34:01.309 [2024-11-27 12:17:51.237308] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:34:01.309 [2024-11-27 12:17:51.237323] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:34:01.309 [2024-11-27 12:17:51.237333] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:34:01.309 [2024-11-27 12:17:51.237343] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:34:01.309 [2024-11-27 12:17:51.237352] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:34:01.309 [2024-11-27 12:17:51.237404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.309 [2024-11-27 12:17:51.237414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:34:01.309 [2024-11-27 12:17:51.237425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms
00:34:01.309 [2024-11-27 12:17:51.237435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.309 [2024-11-27 12:17:51.237501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.309 [2024-11-27 12:17:51.237511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:34:01.309 [2024-11-27 12:17:51.237521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms
00:34:01.309 [2024-11-27 12:17:51.237535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.309 [2024-11-27 12:17:51.237621] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:34:01.309 [2024-11-27 12:17:51.237635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:34:01.309 [2024-11-27 12:17:51.237646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:34:01.309 [2024-11-27 12:17:51.237656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:34:01.309 [2024-11-27 12:17:51.237666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:34:01.309 [2024-11-27 12:17:51.237675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:34:01.309 [2024-11-27 12:17:51.237684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:34:01.309 [2024-11-27 12:17:51.237703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:34:01.309 [2024-11-27 12:17:51.237712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:34:01.309 [2024-11-27 12:17:51.237723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:34:01.309 [2024-11-27 12:17:51.237732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:34:01.309 [2024-11-27 12:17:51.237742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:34:01.309 [2024-11-27 12:17:51.237751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:34:01.309 [2024-11-27 12:17:51.237760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:34:01.309 [2024-11-27 12:17:51.237769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:34:01.309 [2024-11-27 12:17:51.237787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:34:01.309 [2024-11-27 12:17:51.237796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:34:01.309 [2024-11-27 12:17:51.237805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:34:01.309 [2024-11-27 12:17:51.237814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:34:01.309 [2024-11-27 12:17:51.237824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:34:01.309 [2024-11-27 12:17:51.237832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:34:01.309 [2024-11-27 12:17:51.237841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:34:01.309 [2024-11-27 12:17:51.237850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:34:01.309 [2024-11-27 12:17:51.237859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:34:01.309 [2024-11-27 12:17:51.237868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:34:01.309 [2024-11-27 12:17:51.237877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:34:01.309 [2024-11-27 12:17:51.237885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:34:01.309 [2024-11-27 12:17:51.237894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:34:01.309 [2024-11-27 12:17:51.237903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:34:01.309 [2024-11-27 12:17:51.237912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:34:01.309 [2024-11-27 12:17:51.237920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:34:01.309 [2024-11-27 12:17:51.237930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:34:01.309 [2024-11-27 12:17:51.237938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:34:01.309 [2024-11-27 12:17:51.237946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:34:01.309 [2024-11-27 12:17:51.237955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:34:01.310 [2024-11-27 12:17:51.237964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:34:01.310 [2024-11-27 12:17:51.237972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:34:01.310 [2024-11-27 12:17:51.237980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:34:01.310 [2024-11-27 12:17:51.237989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:34:01.310 [2024-11-27 12:17:51.237997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:34:01.310 [2024-11-27 12:17:51.238005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:34:01.310 [2024-11-27 12:17:51.238014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:34:01.310 [2024-11-27 12:17:51.238023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:34:01.310 [2024-11-27 12:17:51.238032] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:34:01.310 [2024-11-27 12:17:51.238041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:34:01.310 [2024-11-27 12:17:51.238051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:34:01.310 [2024-11-27 12:17:51.238060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:34:01.310 [2024-11-27 12:17:51.238073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:34:01.310 [2024-11-27 12:17:51.238082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:34:01.310 [2024-11-27 12:17:51.238091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:34:01.310 [2024-11-27 12:17:51.238100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:34:01.310 [2024-11-27 12:17:51.238108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:34:01.310 [2024-11-27 12:17:51.238117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:34:01.310 [2024-11-27 12:17:51.238128] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:34:01.310 [2024-11-27 12:17:51.238139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:34:01.310 [2024-11-27 12:17:51.238150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:34:01.310 [2024-11-27 12:17:51.238160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:34:01.310 [2024-11-27 12:17:51.238170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:34:01.310 [2024-11-27 12:17:51.238179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:34:01.310 [2024-11-27 12:17:51.238189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:34:01.310 [2024-11-27 12:17:51.238198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:34:01.310 [2024-11-27 12:17:51.238207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:34:01.310 [2024-11-27 12:17:51.238216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:34:01.310 [2024-11-27 12:17:51.238226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:34:01.310 [2024-11-27 12:17:51.238236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:34:01.310 [2024-11-27 12:17:51.238245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:34:01.310 [2024-11-27 12:17:51.238254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:34:01.310 [2024-11-27 12:17:51.238264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:34:01.310 [2024-11-27 12:17:51.238273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:34:01.310 [2024-11-27 12:17:51.238283] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:34:01.310 [2024-11-27 12:17:51.238293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:34:01.310 [2024-11-27 12:17:51.238303] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:34:01.310 [2024-11-27 12:17:51.238312] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:34:01.310 [2024-11-27 12:17:51.238323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:34:01.310 [2024-11-27 12:17:51.238334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:34:01.310 [2024-11-27 12:17:51.238344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.310 [2024-11-27 12:17:51.238354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:34:01.310 [2024-11-27 12:17:51.238377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms
00:34:01.310 [2024-11-27 12:17:51.238386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.310 [2024-11-27 12:17:51.280656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.310 [2024-11-27 12:17:51.280693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:34:01.310 [2024-11-27 12:17:51.280707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.298 ms
00:34:01.310 [2024-11-27 12:17:51.280718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.310 [2024-11-27 12:17:51.280796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.310 [2024-11-27 12:17:51.280807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:34:01.310 [2024-11-27 12:17:51.280822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms
00:34:01.310 [2024-11-27 12:17:51.280833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.571 [2024-11-27 12:17:51.359343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.571 [2024-11-27 12:17:51.359390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:34:01.571 [2024-11-27 12:17:51.359405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.582 ms
00:34:01.571 [2024-11-27 12:17:51.359417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.571 [2024-11-27 12:17:51.359465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.571 [2024-11-27 12:17:51.359477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:34:01.571 [2024-11-27 12:17:51.359488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:34:01.571 [2024-11-27 12:17:51.359498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.571 [2024-11-27 12:17:51.359633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.571 [2024-11-27 12:17:51.359647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:34:01.571 [2024-11-27 12:17:51.359659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms
00:34:01.571 [2024-11-27 12:17:51.359670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.571 [2024-11-27 12:17:51.359799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.571 [2024-11-27 12:17:51.359815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:34:01.571 [2024-11-27 12:17:51.359826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms
00:34:01.571 [2024-11-27 12:17:51.359838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.571 [2024-11-27 12:17:51.382742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.571 [2024-11-27 12:17:51.382776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:34:01.571 [2024-11-27 12:17:51.382791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.917 ms
00:34:01.571 [2024-11-27 12:17:51.382802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.571 [2024-11-27 12:17:51.382938] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:34:01.571 [2024-11-27 12:17:51.382954] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:34:01.571 [2024-11-27 12:17:51.382971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.571 [2024-11-27 12:17:51.382982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:34:01.571 [2024-11-27 12:17:51.382993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms
00:34:01.571 [2024-11-27 12:17:51.383002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.571 [2024-11-27 12:17:51.393468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.571 [2024-11-27 12:17:51.393499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:34:01.571 [2024-11-27 12:17:51.393512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.464 ms
00:34:01.571 [2024-11-27 12:17:51.393523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.571 [2024-11-27 12:17:51.393642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.571 [2024-11-27 12:17:51.393654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:34:01.571 [2024-11-27 12:17:51.393666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms
00:34:01.571 [2024-11-27 12:17:51.393681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.571 [2024-11-27 12:17:51.393757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.571 [2024-11-27 12:17:51.393770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:34:01.571 [2024-11-27 12:17:51.393793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms
00:34:01.571 [2024-11-27 12:17:51.393803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.571 [2024-11-27 12:17:51.394538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.571 [2024-11-27 12:17:51.394563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:34:01.571 [2024-11-27 12:17:51.394574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms
00:34:01.571 [2024-11-27 12:17:51.394585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.571 [2024-11-27 12:17:51.394613] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore
00:34:01.571 [2024-11-27 12:17:51.394627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.571 [2024-11-27 12:17:51.394638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:34:01.571 [2024-11-27 12:17:51.394649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:34:01.571 [2024-11-27 12:17:51.394659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.571 [2024-11-27 12:17:51.407948] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:34:01.571 [2024-11-27 12:17:51.408139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.571 [2024-11-27 12:17:51.408153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:34:01.571 [2024-11-27 12:17:51.408165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.478 ms
00:34:01.571 [2024-11-27 12:17:51.408176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.571 [2024-11-27 12:17:51.410137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.571 [2024-11-27 12:17:51.410169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:34:01.571 [2024-11-27 12:17:51.410180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.942 ms
00:34:01.571 [2024-11-27 12:17:51.410190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.571 [2024-11-27 12:17:51.410293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.571 [2024-11-27 12:17:51.410307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:34:01.571 [2024-11-27 12:17:51.410319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms
00:34:01.571 [2024-11-27 12:17:51.410329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.572 [2024-11-27 12:17:51.410492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.572 [2024-11-27 12:17:51.410515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:34:01.572 [2024-11-27 12:17:51.410527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms
00:34:01.572 [2024-11-27 12:17:51.410538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.572 [2024-11-27 12:17:51.410583] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:34:01.572 [2024-11-27 12:17:51.410596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.572 [2024-11-27 12:17:51.410607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:34:01.572 [2024-11-27 12:17:51.410618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:34:01.572 [2024-11-27 12:17:51.410629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.572 [2024-11-27 12:17:51.446727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.572 [2024-11-27 12:17:51.446765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:34:01.572 [2024-11-27 12:17:51.446778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.132 ms
00:34:01.572 [2024-11-27 12:17:51.446789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.572 [2024-11-27 12:17:51.446863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:01.572 [2024-11-27 12:17:51.446875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:34:01.572 [2024-11-27 12:17:51.446886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:34:01.572 [2024-11-27 12:17:51.446897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:01.572 [2024-11-27 12:17:51.448291] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 219.599 ms, result 0
00:34:02.511  [2024-11-27T12:17:53.504Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-27T12:17:54.883Z] Copying: 49/1024 [MB] (24 MBps) [2024-11-27T12:17:55.843Z] Copying: 73/1024 [MB] (24 MBps) [2024-11-27T12:17:56.820Z] Copying: 98/1024 [MB] (24 MBps) [2024-11-27T12:17:57.759Z] Copying: 122/1024 [MB] (24 MBps) [2024-11-27T12:17:58.698Z] Copying: 147/1024 [MB] (24 MBps) [2024-11-27T12:17:59.635Z] Copying: 171/1024 [MB] (24 MBps) [2024-11-27T12:18:00.573Z] Copying: 196/1024 [MB] (24 MBps) [2024-11-27T12:18:01.511Z] Copying: 220/1024 [MB] (24 MBps) [2024-11-27T12:18:02.449Z] Copying: 244/1024 [MB] (24 MBps) [2024-11-27T12:18:03.828Z] Copying: 269/1024 [MB] (24 MBps) [2024-11-27T12:18:04.766Z] Copying: 293/1024 [MB] (24 MBps) [2024-11-27T12:18:05.703Z] Copying: 317/1024 [MB] (23 MBps) [2024-11-27T12:18:06.640Z] Copying: 341/1024 [MB] (24 MBps) [2024-11-27T12:18:07.578Z] Copying: 365/1024 [MB] (24 MBps) [2024-11-27T12:18:08.515Z] Copying: 390/1024 [MB] (24 MBps) [2024-11-27T12:18:09.453Z] Copying: 414/1024 [MB] (24 MBps) [2024-11-27T12:18:10.835Z] Copying: 438/1024 [MB] (23 MBps) [2024-11-27T12:18:11.774Z] Copying: 462/1024 [MB] (24 MBps) [2024-11-27T12:18:12.713Z] Copying: 487/1024 [MB] (24 MBps) [2024-11-27T12:18:13.650Z] Copying: 512/1024 [MB] (24 MBps) [2024-11-27T12:18:14.587Z] Copying: 537/1024 [MB] (24 MBps) [2024-11-27T12:18:15.527Z] Copying: 562/1024 [MB] (24 MBps) [2024-11-27T12:18:16.465Z] Copying: 587/1024 [MB] (25 MBps) [2024-11-27T12:18:17.846Z] Copying: 612/1024 [MB] (24 MBps) [2024-11-27T12:18:18.784Z] Copying: 636/1024 [MB] (24 MBps) [2024-11-27T12:18:19.733Z] Copying: 661/1024 [MB] (24 MBps) [2024-11-27T12:18:20.671Z] Copying: 685/1024 [MB] (24 MBps) [2024-11-27T12:18:21.610Z] Copying: 710/1024 [MB] (24 MBps) [2024-11-27T12:18:22.550Z] Copying: 734/1024 [MB] (24 MBps) [2024-11-27T12:18:23.490Z] Copying: 759/1024 [MB] (24 MBps) [2024-11-27T12:18:24.428Z] Copying: 783/1024 [MB] (24 MBps) [2024-11-27T12:18:25.808Z] Copying: 808/1024 [MB] (24 MBps) [2024-11-27T12:18:26.787Z] Copying: 833/1024 [MB] (24 MBps) [2024-11-27T12:18:27.753Z] Copying: 857/1024 [MB] (24 MBps) [2024-11-27T12:18:28.691Z] Copying: 882/1024 [MB] (24 MBps) [2024-11-27T12:18:29.629Z] Copying: 906/1024 [MB] (24 MBps) [2024-11-27T12:18:30.569Z] Copying: 930/1024 [MB] (24 MBps) [2024-11-27T12:18:31.507Z] Copying: 955/1024 [MB] (24 MBps) [2024-11-27T12:18:32.443Z] Copying: 981/1024 [MB] (25 MBps) [2024-11-27T12:18:33.821Z] Copying: 1005/1024 [MB] (24 MBps) [2024-11-27T12:18:34.082Z] Copying: 1023/1024 [MB] (17 MBps) [2024-11-27T12:18:34.082Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-27 12:18:33.946425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:44.029 [2024-11-27 12:18:33.946645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:34:44.029 [2024-11-27 12:18:33.946672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:34:44.029 [2024-11-27 12:18:33.946684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:44.029 [2024-11-27 12:18:33.947880] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:34:44.029 [2024-11-27 12:18:33.953864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:44.029 [2024-11-27 12:18:33.953903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:34:44.029 [2024-11-27 12:18:33.953917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.955 ms
00:34:44.029 [2024-11-27 12:18:33.953928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:44.029 [2024-11-27 12:18:33.961272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:44.029 [2024-11-27 12:18:33.961309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:34:44.029 [2024-11-27 12:18:33.961322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.435 ms
00:34:44.029 [2024-11-27 12:18:33.961332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:44.029 [2024-11-27 12:18:33.961387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:44.029 [2024-11-27 12:18:33.961398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata
00:34:44.029 [2024-11-27 12:18:33.961409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:34:44.029 [2024-11-27 12:18:33.961419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:44.029 [2024-11-27 12:18:33.961477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:44.029 [2024-11-27 12:18:33.961492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state
00:34:44.029 [2024-11-27 12:18:33.961503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms
00:34:44.029 [2024-11-27 12:18:33.961512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:44.029 [2024-11-27 12:18:33.961527] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:34:44.029 [2024-11-27 12:18:33.961540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127488 / 261120 wr_cnt: 1 state: open
00:34:44.029 [2024-11-27 12:18:33.961553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:34:44.029 [2024-11-27 12:18:33.961994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:34:44.030 [2024-11-27 12:18:33.962686] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:34:44.030 [2024-11-27 12:18:33.962696] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9518d22f-ff7b-4dae-936b-af48dab8ee92
00:34:44.030 [2024-11-27 12:18:33.962707] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127488
00:34:44.030 [2024-11-27 12:18:33.962717] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127520
00:34:44.030 [2024-11-27 12:18:33.962727] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127488
00:34:44.030 [2024-11-27 12:18:33.962737] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003
00:34:44.030 [2024-11-27 12:18:33.962751] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:34:44.030 [2024-11-27 12:18:33.962761] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:34:44.030 [2024-11-27 12:18:33.962771] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:34:44.030 [2024-11-27 12:18:33.962780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:34:44.030 [2024-11-27 12:18:33.962789] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:34:44.030 [2024-11-27 12:18:33.962799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:44.030 [2024-11-27 12:18:33.962809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:34:44.030 [2024-11-27 12:18:33.962819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.274 ms
00:34:44.030 [2024-11-27 12:18:33.962829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:44.030 [2024-11-27 12:18:33.982189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:44.030 [2024-11-27 12:18:33.982223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:34:44.030 [2024-11-27 12:18:33.982241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.372 ms
00:34:44.030 [2024-11-27 12:18:33.982251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:44.030 [2024-11-27 12:18:33.982826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:44.030 [2024-11-27 12:18:33.982844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:34:44.030 [2024-11-27 12:18:33.982855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms
00:34:44.030 [2024-11-27 12:18:33.982865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:44.030 [2024-11-27 12:18:34.032740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:44.031 [2024-11-27 12:18:34.032779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:34:44.031 [2024-11-27 12:18:34.032790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:44.031 [2024-11-27 12:18:34.032817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:44.031 [2024-11-27 12:18:34.032873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:44.031 [2024-11-27 12:18:34.032884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:34:44.031 [2024-11-27 12:18:34.032895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:44.031 [2024-11-27 12:18:34.032905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:44.031 [2024-11-27 12:18:34.032980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:44.031 [2024-11-27 12:18:34.032993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:34:44.031 [2024-11-27 12:18:34.033007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:44.031 [2024-11-27 12:18:34.033016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:44.031 [2024-11-27 12:18:34.033031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:44.031 [2024-11-27 12:18:34.033042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:34:44.031 [2024-11-27 12:18:34.033051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:44.031 [2024-11-27 12:18:34.033060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:44.290 [2024-11-27 12:18:34.149566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:44.290 [2024-11-27 12:18:34.149626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:34:44.290 [2024-11-27 12:18:34.149640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:44.290 [2024-11-27 12:18:34.149667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:44.290 [2024-11-27 12:18:34.244608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:44.290 [2024-11-27 12:18:34.244662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:34:44.290 [2024-11-27 12:18:34.244675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:44.290 [2024-11-27 12:18:34.244685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:44.290 [2024-11-27 12:18:34.244785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:44.290 [2024-11-27 12:18:34.244798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:34:44.290 [2024-11-27 12:18:34.244808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:44.290 [2024-11-27 12:18:34.244823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:44.290 [2024-11-27 12:18:34.244861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:44.290 [2024-11-27 12:18:34.244871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:34:44.290 [2024-11-27 12:18:34.244882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:44.290 [2024-11-27 12:18:34.244891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:44.291 [2024-11-27 12:18:34.244993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:44.291 [2024-11-27 12:18:34.245006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:34:44.291 [2024-11-27 12:18:34.245016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:44.291 [2024-11-27 12:18:34.245026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:44.291 [2024-11-27 12:18:34.245075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:44.291 [2024-11-27 12:18:34.245088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:34:44.291 [2024-11-27 12:18:34.245099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:44.291 [2024-11-27 12:18:34.245108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:44.291 [2024-11-27 12:18:34.245147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:44.291 [2024-11-27 12:18:34.245158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:34:44.291 [2024-11-27 12:18:34.245168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:44.291 [2024-11-27 12:18:34.245178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:44.291 [2024-11-27 12:18:34.245225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:44.291 [2024-11-27 12:18:34.245236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:34:44.291 [2024-11-27 12:18:34.245246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:44.291 [2024-11-27 12:18:34.245255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:44.291 [2024-11-27 12:18:34.245375] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 301.719 ms, result 0
00:34:45.670
00:34:45.670
00:34:45.930 12:18:35 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
00:34:45.930 [2024-11-27 12:18:35.817729] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0
initialization... 00:34:45.930 [2024-11-27 12:18:35.817866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86111 ] 00:34:46.190 [2024-11-27 12:18:36.002234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:46.190 [2024-11-27 12:18:36.118586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:46.450 [2024-11-27 12:18:36.453002] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:46.450 [2024-11-27 12:18:36.453065] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:46.711 [2024-11-27 12:18:36.613335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.711 [2024-11-27 12:18:36.613395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:46.711 [2024-11-27 12:18:36.613410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:46.711 [2024-11-27 12:18:36.613419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.711 [2024-11-27 12:18:36.613466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.711 [2024-11-27 12:18:36.613480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:46.711 [2024-11-27 12:18:36.613490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:34:46.711 [2024-11-27 12:18:36.613500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.711 [2024-11-27 12:18:36.613520] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:46.711 [2024-11-27 12:18:36.614504] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:46.711 [2024-11-27 12:18:36.614528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.711 [2024-11-27 12:18:36.614539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:46.711 [2024-11-27 12:18:36.614550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:34:46.711 [2024-11-27 12:18:36.614560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.711 [2024-11-27 12:18:36.614862] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:34:46.711 [2024-11-27 12:18:36.614883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.711 [2024-11-27 12:18:36.614899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:46.711 [2024-11-27 12:18:36.614910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:34:46.711 [2024-11-27 12:18:36.614919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.711 [2024-11-27 12:18:36.614993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.711 [2024-11-27 12:18:36.615006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:46.711 [2024-11-27 12:18:36.615017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:34:46.711 [2024-11-27 12:18:36.615027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.711 [2024-11-27 12:18:36.615456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:34:46.711 [2024-11-27 12:18:36.615470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:46.711 [2024-11-27 12:18:36.615480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:34:46.711 [2024-11-27 12:18:36.615491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.711 [2024-11-27 12:18:36.615563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.711 [2024-11-27 12:18:36.615576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:46.711 [2024-11-27 12:18:36.615596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:34:46.711 [2024-11-27 12:18:36.615606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.711 [2024-11-27 12:18:36.615628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.712 [2024-11-27 12:18:36.615639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:46.712 [2024-11-27 12:18:36.615652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:46.712 [2024-11-27 12:18:36.615661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.712 [2024-11-27 12:18:36.615682] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:46.712 [2024-11-27 12:18:36.620675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.712 [2024-11-27 12:18:36.620702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:46.712 [2024-11-27 12:18:36.620713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.005 ms 00:34:46.712 [2024-11-27 12:18:36.620722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.712 [2024-11-27 12:18:36.620751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.712 [2024-11-27 12:18:36.620760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:46.712 [2024-11-27 12:18:36.620770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:34:46.712 [2024-11-27 12:18:36.620779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.712 [2024-11-27 12:18:36.620830] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:46.712 [2024-11-27 12:18:36.620862] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:46.712 [2024-11-27 12:18:36.620897] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:46.712 [2024-11-27 12:18:36.620913] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:46.712 [2024-11-27 12:18:36.620994] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:46.712 [2024-11-27 12:18:36.621005] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:46.712 [2024-11-27 12:18:36.621018] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:46.712 [2024-11-27 12:18:36.621047] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:46.712 [2024-11-27 12:18:36.621058] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:46.712 [2024-11-27 12:18:36.621072] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:46.712 [2024-11-27 12:18:36.621098] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:46.712 [2024-11-27 12:18:36.621107] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:46.712 [2024-11-27 12:18:36.621117] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:46.712 [2024-11-27 12:18:36.621127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.712 [2024-11-27 12:18:36.621136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:46.712 [2024-11-27 12:18:36.621146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:34:46.712 [2024-11-27 12:18:36.621156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.712 [2024-11-27 12:18:36.621225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.712 [2024-11-27 12:18:36.621235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:46.712 [2024-11-27 12:18:36.621245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:34:46.712 [2024-11-27 12:18:36.621258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.712 [2024-11-27 12:18:36.621350] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:46.712 [2024-11-27 12:18:36.621378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:46.712 [2024-11-27 12:18:36.621389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:46.712 [2024-11-27 12:18:36.621399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:46.712 [2024-11-27 12:18:36.621410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:46.712 [2024-11-27 12:18:36.621420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:46.712 [2024-11-27 12:18:36.621429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:46.712 [2024-11-27 12:18:36.621439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:46.712 [2024-11-27 12:18:36.621448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:46.712 [2024-11-27 12:18:36.621457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:46.712 [2024-11-27 12:18:36.621467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:46.712 [2024-11-27 12:18:36.621476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:46.712 [2024-11-27 12:18:36.621485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:46.712 [2024-11-27 12:18:36.621494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:46.712 [2024-11-27 12:18:36.621503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:46.712 [2024-11-27 12:18:36.621522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:46.712 [2024-11-27 12:18:36.621531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:46.712 [2024-11-27 12:18:36.621540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:46.712 [2024-11-27 12:18:36.621549] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:46.712 [2024-11-27 12:18:36.621558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:46.712 [2024-11-27 12:18:36.621567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:46.712 [2024-11-27 12:18:36.621576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:46.712 [2024-11-27 12:18:36.621585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:46.712 [2024-11-27 12:18:36.621594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:46.712 [2024-11-27 12:18:36.621603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:46.712 [2024-11-27 12:18:36.621611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:46.712 [2024-11-27 12:18:36.621620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:46.712 [2024-11-27 12:18:36.621629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:46.712 [2024-11-27 12:18:36.621639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:46.712 [2024-11-27 12:18:36.621648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:46.712 [2024-11-27 12:18:36.621657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:46.712 [2024-11-27 12:18:36.621666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:46.712 [2024-11-27 12:18:36.621676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:46.712 [2024-11-27 12:18:36.621684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:46.712 [2024-11-27 12:18:36.621693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:46.712 [2024-11-27 12:18:36.621711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:46.712 [2024-11-27 12:18:36.621721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:46.712 [2024-11-27 12:18:36.621730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:46.712 [2024-11-27 12:18:36.621740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:46.712 [2024-11-27 12:18:36.621749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:46.712 [2024-11-27 12:18:36.621758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:46.712 [2024-11-27 12:18:36.621767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:46.712 [2024-11-27 12:18:36.621776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:46.712 [2024-11-27 12:18:36.621785] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:46.712 [2024-11-27 12:18:36.621795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:46.712 [2024-11-27 12:18:36.621804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:46.712 [2024-11-27 12:18:36.621813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:46.712 [2024-11-27 12:18:36.621827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:46.712 [2024-11-27 12:18:36.621836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:46.712 [2024-11-27 12:18:36.621844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:46.712 
[2024-11-27 12:18:36.621853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:46.712 [2024-11-27 12:18:36.621862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:46.712 [2024-11-27 12:18:36.621871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:46.712 [2024-11-27 12:18:36.621881] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:46.712 [2024-11-27 12:18:36.621894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:46.712 [2024-11-27 12:18:36.621905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:46.712 [2024-11-27 12:18:36.621915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:46.712 [2024-11-27 12:18:36.621925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:46.712 [2024-11-27 12:18:36.621935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:46.712 [2024-11-27 12:18:36.621944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:46.712 [2024-11-27 12:18:36.621954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:46.712 [2024-11-27 12:18:36.621964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:46.712 [2024-11-27 12:18:36.621974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:46.712 [2024-11-27 12:18:36.621984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:46.712 [2024-11-27 12:18:36.621994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:46.712 [2024-11-27 12:18:36.622004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:46.713 [2024-11-27 12:18:36.622014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:46.713 [2024-11-27 12:18:36.622023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:46.713 [2024-11-27 12:18:36.622034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:46.713 [2024-11-27 12:18:36.622045] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:46.713 [2024-11-27 12:18:36.622056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:46.713 [2024-11-27 12:18:36.622067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:34:46.713 [2024-11-27 12:18:36.622077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:46.713 [2024-11-27 12:18:36.622087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:46.713 [2024-11-27 12:18:36.622097] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:46.713 [2024-11-27 12:18:36.622108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.713 [2024-11-27 12:18:36.622118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:46.713 [2024-11-27 12:18:36.622127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:34:46.713 [2024-11-27 12:18:36.622137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.713 [2024-11-27 12:18:36.654590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.713 [2024-11-27 12:18:36.654622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:46.713 [2024-11-27 12:18:36.654652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.464 ms 00:34:46.713 [2024-11-27 12:18:36.654662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.713 [2024-11-27 12:18:36.654736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.713 [2024-11-27 12:18:36.654747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:46.713 [2024-11-27 12:18:36.654761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:34:46.713 [2024-11-27 12:18:36.654771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.713 [2024-11-27 12:18:36.727022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.713 [2024-11-27 12:18:36.727057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:46.713 [2024-11-27 12:18:36.727072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.316 ms 00:34:46.713 [2024-11-27 12:18:36.727082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.713 [2024-11-27 12:18:36.727126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.713 [2024-11-27 12:18:36.727138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:46.713 [2024-11-27 12:18:36.727148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:34:46.713 [2024-11-27 12:18:36.727157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.713 [2024-11-27 12:18:36.727280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.713 [2024-11-27 12:18:36.727293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:46.713 [2024-11-27 12:18:36.727304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:34:46.713 [2024-11-27 12:18:36.727313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.713 [2024-11-27 12:18:36.727470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.713 [2024-11-27 12:18:36.727484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:46.713 [2024-11-27 12:18:36.727495] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:34:46.713 [2024-11-27 12:18:36.727505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.713 [2024-11-27 12:18:36.745023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.713 [2024-11-27 12:18:36.745053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:46.713 [2024-11-27 12:18:36.745082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.525 ms 00:34:46.713 [2024-11-27 12:18:36.745092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.713 [2024-11-27 12:18:36.745218] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:34:46.713 [2024-11-27 12:18:36.745234] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:46.713 [2024-11-27 12:18:36.745250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.713 [2024-11-27 12:18:36.745260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:46.713 [2024-11-27 12:18:36.745270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:34:46.713 [2024-11-27 12:18:36.745280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.713 [2024-11-27 12:18:36.755945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.713 [2024-11-27 12:18:36.755973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:46.713 [2024-11-27 12:18:36.755984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.665 ms 00:34:46.713 [2024-11-27 12:18:36.756010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.713 [2024-11-27 12:18:36.756123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.713 [2024-11-27 12:18:36.756134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:46.713 [2024-11-27 12:18:36.756144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:34:46.713 [2024-11-27 12:18:36.756158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.713 [2024-11-27 12:18:36.756207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.713 [2024-11-27 12:18:36.756219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:46.713 [2024-11-27 12:18:36.756229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:34:46.713 [2024-11-27 12:18:36.756248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.713 [2024-11-27 12:18:36.756917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.713 [2024-11-27 12:18:36.756939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:46.713 [2024-11-27 12:18:36.756951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:34:46.713 [2024-11-27 12:18:36.756960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.713 [2024-11-27 12:18:36.756984] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:46.713 [2024-11-27 12:18:36.756997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.713 [2024-11-27 12:18:36.757007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:34:46.713 [2024-11-27 12:18:36.757017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:34:46.713 [2024-11-27 12:18:36.757027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.973 [2024-11-27 12:18:36.769044] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:46.973 [2024-11-27 12:18:36.769230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.973 [2024-11-27 12:18:36.769244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:46.973 [2024-11-27 12:18:36.769256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.203 ms 00:34:46.973 [2024-11-27 12:18:36.769266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.973 [2024-11-27 12:18:36.771180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.973 [2024-11-27 12:18:36.771207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:46.973 [2024-11-27 12:18:36.771219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.892 ms 00:34:46.973 [2024-11-27 12:18:36.771229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.973 [2024-11-27 12:18:36.771305] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:34:46.973 [2024-11-27 12:18:36.771699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.973 [2024-11-27 12:18:36.771715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:46.973 [2024-11-27 12:18:36.771726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:34:46.973 [2024-11-27 12:18:36.771736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.973 [2024-11-27 12:18:36.771768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.973 [2024-11-27 12:18:36.771779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:46.973 [2024-11-27 12:18:36.771789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:46.973 [2024-11-27 12:18:36.771799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.973 [2024-11-27 12:18:36.771831] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:46.973 [2024-11-27 12:18:36.771843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.973 [2024-11-27 12:18:36.771853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:46.973 [2024-11-27 12:18:36.771862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:46.973 [2024-11-27 12:18:36.771871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.973 [2024-11-27 12:18:36.807352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.973 [2024-11-27 12:18:36.807393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:46.973 [2024-11-27 12:18:36.807422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.520 ms 00:34:46.973 [2024-11-27 12:18:36.807433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.973 [2024-11-27 12:18:36.807508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.973 [2024-11-27 12:18:36.807521] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:46.973 [2024-11-27 12:18:36.807532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:34:46.973 [2024-11-27 12:18:36.807542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.973 [2024-11-27 12:18:36.808622] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 195.108 ms, result 0 00:34:48.353  [2024-11-27T12:18:39.343Z] Copying: 25/1024 [MB] (25 MBps) ... [2024-11-27T12:19:19.224Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-27 12:19:19.083942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:29.171 [2024-11-27 12:19:19.084026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:29.171 [2024-11-27 12:19:19.084047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:29.171 [2024-11-27 12:19:19.084059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:29.171 [2024-11-27 12:19:19.084085] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*:
[FTL][ftl0] FTL IO channel destroy on app_thread 00:35:29.171 [2024-11-27 12:19:19.089256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:29.171 [2024-11-27 12:19:19.089439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:29.171 [2024-11-27 12:19:19.089529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.156 ms 00:35:29.171 [2024-11-27 12:19:19.089583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:29.171 [2024-11-27 12:19:19.089842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:29.171 [2024-11-27 12:19:19.090041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:29.171 [2024-11-27 12:19:19.090088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:35:29.171 [2024-11-27 12:19:19.090123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:29.171 [2024-11-27 12:19:19.090187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:29.171 [2024-11-27 12:19:19.090224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:29.171 [2024-11-27 12:19:19.090260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:35:29.171 [2024-11-27 12:19:19.090507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:29.171 [2024-11-27 12:19:19.090608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:29.171 [2024-11-27 12:19:19.090652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:29.171 [2024-11-27 12:19:19.090895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:35:29.171 [2024-11-27 12:19:19.090937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:29.171 [2024-11-27 12:19:19.090984] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:29.171 [2024-11-27 12:19:19.091027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:35:29.171 [2024-11-27 12:19:19.091273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:29.171 [2024-11-27 12:19:19.091290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:29.171 [2024-11-27 12:19:19.091303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:29.171 [2024-11-27 12:19:19.091316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:29.171 [2024-11-27 12:19:19.091330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:29.171 [2024-11-27 12:19:19.091343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 
00:35:29.172 [2024-11-27 12:19:19.091423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 
wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.091988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092680] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:29.172 [2024-11-27 12:19:19.092786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:29.173 [2024-11-27 12:19:19.092797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:29.173 [2024-11-27 12:19:19.092808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:29.173 [2024-11-27 12:19:19.092820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:29.173 [2024-11-27 12:19:19.092832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:29.173 [2024-11-27 12:19:19.092844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:29.173 [2024-11-27 12:19:19.092864] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:29.173 [2024-11-27 12:19:19.092876] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9518d22f-ff7b-4dae-936b-af48dab8ee92 00:35:29.173 [2024-11-27 12:19:19.092889] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:35:29.173 [2024-11-27 12:19:19.092900] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3616 00:35:29.173 [2024-11-27 12:19:19.092911] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3584 00:35:29.173 [2024-11-27 12:19:19.092928] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:35:29.173 [2024-11-27 12:19:19.092939] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:29.173 [2024-11-27 12:19:19.092959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:29.173 [2024-11-27 12:19:19.092970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:29.173 [2024-11-27 12:19:19.092981] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:29.173 [2024-11-27 12:19:19.092991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:29.173 [2024-11-27 12:19:19.093003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:29.173 [2024-11-27 12:19:19.093014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump 
statistics
00:35:29.173 [2024-11-27 12:19:19.093026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.024 ms
00:35:29.173 [2024-11-27 12:19:19.093037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:29.173 [2024-11-27 12:19:19.113438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:29.173 [2024-11-27 12:19:19.113482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:35:29.173 [2024-11-27 12:19:19.113505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.408 ms
00:35:29.173 [2024-11-27 12:19:19.113518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:29.173 [2024-11-27 12:19:19.114110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:29.173 [2024-11-27 12:19:19.114133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:35:29.173 [2024-11-27 12:19:19.114146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms
00:35:29.173 [2024-11-27 12:19:19.114158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:29.173 [2024-11-27 12:19:19.164672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:29.173 [2024-11-27 12:19:19.164717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:35:29.173 [2024-11-27 12:19:19.164732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:29.173 [2024-11-27 12:19:19.164745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:29.173 [2024-11-27 12:19:19.164808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:29.173 [2024-11-27 12:19:19.164822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:35:29.173 [2024-11-27 12:19:19.164834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:29.173 [2024-11-27 12:19:19.164847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:29.173 [2024-11-27 12:19:19.164911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:29.173 [2024-11-27 12:19:19.164934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:35:29.173 [2024-11-27 12:19:19.164946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:29.173 [2024-11-27 12:19:19.164958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:29.173 [2024-11-27 12:19:19.164978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:29.173 [2024-11-27 12:19:19.164990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:35:29.173 [2024-11-27 12:19:19.165001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:29.173 [2024-11-27 12:19:19.165013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:29.432 [2024-11-27 12:19:19.292193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:29.432 [2024-11-27 12:19:19.292256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:35:29.432 [2024-11-27 12:19:19.292273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:29.432 [2024-11-27 12:19:19.292287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:29.432 [2024-11-27 12:19:19.393923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:29.432 [2024-11-27 12:19:19.393983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:35:29.432 [2024-11-27 12:19:19.394001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:29.432 [2024-11-27 12:19:19.394015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:29.432 [2024-11-27 12:19:19.394132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:29.432 [2024-11-27 12:19:19.394146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:35:29.432 [2024-11-27 12:19:19.394167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:29.432 [2024-11-27 12:19:19.394179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:29.432 [2024-11-27 12:19:19.394242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:29.432 [2024-11-27 12:19:19.394255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:35:29.432 [2024-11-27 12:19:19.394268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:29.432 [2024-11-27 12:19:19.394279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:29.432 [2024-11-27 12:19:19.394402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:29.432 [2024-11-27 12:19:19.394418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:35:29.432 [2024-11-27 12:19:19.394432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:29.432 [2024-11-27 12:19:19.394449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:29.432 [2024-11-27 12:19:19.394489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:29.432 [2024-11-27 12:19:19.394504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:35:29.432 [2024-11-27 12:19:19.394516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:29.432 [2024-11-27 12:19:19.394528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:29.432 [2024-11-27 12:19:19.394580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:29.432 [2024-11-27 12:19:19.394593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:35:29.432 [2024-11-27 12:19:19.394605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:29.432 [2024-11-27 12:19:19.394622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:29.432 [2024-11-27 12:19:19.394675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:29.432 [2024-11-27 12:19:19.394690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:35:29.432 [2024-11-27 12:19:19.394702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:29.432 [2024-11-27 12:19:19.394713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:29.432 [2024-11-27 12:19:19.394870] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 311.383 ms, result 0
00:35:30.811
00:35:30.811
00:35:30.811 12:19:20 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:35:32.192 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:35:32.192 12:19:22 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
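
Note: the md5sum -c step above is the restore test's data-integrity check: a checksum file recorded before the FTL fast shutdown is re-verified against the restored device, and md5sum prints '<file>: OK' on a match. A minimal sketch of the same pattern (the paths are illustrative, not the test's actual helpers):

    md5sum /path/to/testfile > /path/to/testfile.md5   # record checksum before shutdown
    # ... fast-shutdown and restore the FTL device ...
    md5sum -c /path/to/testfile.md5                    # exits non-zero if contents changed
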
00:35:32.192 12:19:22 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill
00:35:32.192 12:19:22 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:35:32.452 12:19:22 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:35:32.452 12:19:22 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:35:32.452 Process with pid 84528 is not found
00:35:32.452 Remove shared memory files
00:35:32.452 12:19:22 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 84528
00:35:32.452 12:19:22 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 84528 ']'
00:35:32.452 12:19:22 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 84528
00:35:32.452 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84528) - No such process
00:35:32.452 12:19:22 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 84528 is not found'
00:35:32.452 12:19:22 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm
00:35:32.452 12:19:22 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files
00:35:32.452 12:19:22 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f
00:35:32.452 12:19:22 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_9518d22f-ff7b-4dae-936b-af48dab8ee92_band_md /dev/hugepages/ftl_9518d22f-ff7b-4dae-936b-af48dab8ee92_l2p_l1 /dev/hugepages/ftl_9518d22f-ff7b-4dae-936b-af48dab8ee92_l2p_l2 /dev/hugepages/ftl_9518d22f-ff7b-4dae-936b-af48dab8ee92_l2p_l2_ctx /dev/hugepages/ftl_9518d22f-ff7b-4dae-936b-af48dab8ee92_nvc_md /dev/hugepages/ftl_9518d22f-ff7b-4dae-936b-af48dab8ee92_p2l_pool /dev/hugepages/ftl_9518d22f-ff7b-4dae-936b-af48dab8ee92_sb /dev/hugepages/ftl_9518d22f-ff7b-4dae-936b-af48dab8ee92_sb_shm /dev/hugepages/ftl_9518d22f-ff7b-4dae-936b-af48dab8ee92_trim_bitmap /dev/hugepages/ftl_9518d22f-ff7b-4dae-936b-af48dab8ee92_trim_log /dev/hugepages/ftl_9518d22f-ff7b-4dae-936b-af48dab8ee92_trim_md /dev/hugepages/ftl_9518d22f-ff7b-4dae-936b-af48dab8ee92_vmap
00:35:32.452 12:19:22 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f
00:35:32.452 12:19:22 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:35:32.452 12:19:22 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f
00:35:32.452
00:35:32.452 real 3m21.311s
00:35:32.452 user 3m8.039s
00:35:32.452 sys 0m14.618s
00:35:32.452 12:19:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:32.452 12:19:22 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x
00:35:32.452 ************************************
00:35:32.452 END TEST ftl_restore_fast
00:35:32.452 ************************************
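
Note: the killprocess helper seen above probes with kill -0, which delivers no signal and only reports whether the pid still exists; pid 84528 had already exited, hence the "No such process" line. A simplified sketch of the idiom (not the exact autotest_common.sh implementation):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                 # no pid given
        if ! kill -0 "$pid" 2>/dev/null; then     # probe without signalling
            echo "Process with pid $pid is not found"
            return 0
        fi
        kill "$pid" && wait "$pid"                # terminate, then reap the child
    }

wait can only reap children of the current shell, which holds here because the test scripts launch spdk_tgt themselves.
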
00:35:32.452 12:19:22 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:35:32.452 12:19:22 ftl -- ftl/ftl.sh@14 -- # killprocess 76643
00:35:32.452 12:19:22 ftl -- common/autotest_common.sh@954 -- # '[' -z 76643 ']'
00:35:32.452 12:19:22 ftl -- common/autotest_common.sh@958 -- # kill -0 76643
00:35:32.452 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76643) - No such process
00:35:32.452 Process with pid 76643 is not found
00:35:32.452 12:19:22 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76643 is not found'
00:35:32.452 12:19:22 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:35:32.452 12:19:22 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=86600
00:35:32.452 12:19:22 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:35:32.452 12:19:22 ftl -- ftl/ftl.sh@20 -- # waitforlisten 86600
00:35:32.452 12:19:22 ftl -- common/autotest_common.sh@835 -- # '[' -z 86600 ']'
00:35:32.452 12:19:22 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:32.452 12:19:22 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:32.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:32.452 12:19:22 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:32.452 12:19:22 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:32.452 12:19:22 ftl -- common/autotest_common.sh@10 -- # set +x
00:35:32.712 [2024-11-27 12:19:22.512237] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:35:32.712 [2024-11-27 12:19:22.512376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86600 ]
00:35:32.712 [2024-11-27 12:19:22.693930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:32.971 [2024-11-27 12:19:22.827900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:35:33.910 12:19:23 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:33.910 12:19:23 ftl -- common/autotest_common.sh@868 -- # return 0
00:35:33.910 12:19:23 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:35:34.168 nvme0n1
00:35:34.168 12:19:24 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:35:34.168 12:19:24 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:35:34.168 12:19:24 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:35:34.428 12:19:24 ftl -- ftl/common.sh@28 -- # stores=ee1e7619-443c-44c5-99bc-2e2f5c00ecba
00:35:34.428 12:19:24 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:35:34.428 12:19:24 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ee1e7619-443c-44c5-99bc-2e2f5c00ecba
00:35:34.689 12:19:24 ftl -- ftl/ftl.sh@23 -- # killprocess 86600
00:35:34.689 12:19:24 ftl -- common/autotest_common.sh@954 -- # '[' -z 86600 ']'
00:35:34.689 12:19:24 ftl -- common/autotest_common.sh@958 -- # kill -0 86600
00:35:34.689 12:19:24 ftl -- common/autotest_common.sh@959 -- # uname
00:35:34.689 12:19:24 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:34.689 12:19:24 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86600
00:35:34.689 killing process with pid 86600
12:19:24 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
12:19:24 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
12:19:24 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86600'
12:19:24 ftl -- common/autotest_common.sh@973 -- # kill 86600
12:19:24 ftl -- common/autotest_common.sh@978 -- # wait 86600
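
Note: the clear_lvols step above queries the freshly started spdk_tgt over its RPC socket for every lvstore and deletes each by UUID, leaving the base bdev clean for teardown. Condensed into standalone shell (rpc.py path shortened; the RPC method names are the ones visible in the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    stores=$("$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid')   # list lvstore UUIDs as plain text
    for lvs in $stores; do
        "$rpc" bdev_lvol_delete_lvstore -u "$lvs"                   # delete each store by UUID
    done
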
00:35:37.229 12:19:27 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:35:37.489 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:35:37.489 Waiting for block devices as requested
00:35:37.750 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:35:37.750 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:35:37.750 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:35:38.010 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:35:43.352 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:35:43.352 12:19:33 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:35:43.352 Remove shared memory files
00:35:43.352 12:19:33 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:35:43.352 12:19:33 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:35:43.352 12:19:33 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:35:43.352 12:19:33 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:35:43.352 12:19:33 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:35:43.352 12:19:33 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:35:43.352 ************************************
00:35:43.352 END TEST ftl
00:35:43.352 ************************************
00:35:43.352
00:35:43.352 real 14m55.991s
00:35:43.352 user 17m18.566s
00:35:43.352 sys 1m49.128s
00:35:43.352 12:19:33 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:43.352 12:19:33 ftl -- common/autotest_common.sh@10 -- # set +x
00:35:43.353 12:19:33 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:35:43.353 12:19:33 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:35:43.353 12:19:33 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:35:43.353 12:19:33 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:35:43.353 12:19:33 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:35:43.353 12:19:33 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:35:43.353 12:19:33 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:35:43.353 12:19:33 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:35:43.353 12:19:33 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:35:43.353 12:19:33 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:35:43.353 12:19:33 -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:43.353 12:19:33 -- common/autotest_common.sh@10 -- # set +x
00:35:43.353 12:19:33 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:35:43.353 12:19:33 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:35:43.353 12:19:33 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:35:43.353 12:19:33 -- common/autotest_common.sh@10 -- # set +x
00:35:45.888 INFO: APP EXITING
00:35:45.888 INFO: killing all VMs
00:35:45.888 INFO: killing vhost app
00:35:45.888 INFO: EXIT DONE
00:35:46.147 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:35:46.407 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:35:46.407 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:35:46.666 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:35:46.666 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:35:47.235 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
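
Note: remove_shm (run here and after the restore test earlier) clears FTL's shared-memory state: per-device files under /dev/hugepages named ftl_<uuid>_<region> (band_md, l2p_*, nvc_md, sb, trim_*, vmap) plus /dev/shm/iscsi. A hedged sketch of that cleanup; the glob is an assumption, since the real ftl/common.sh expands an explicit file list:

    uuid=9518d22f-ff7b-4dae-936b-af48dab8ee92   # example device uuid taken from the log above
    rm -f /dev/hugepages/ftl_"${uuid}"_*        # assumed glob over the per-region files
    rm -f /dev/shm/iscsi
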
00:35:47.494 Cleaning
00:35:47.494 Removing: /var/run/dpdk/spdk0/config
00:35:47.494 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:35:47.494 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:35:47.494 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:35:47.494 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:35:47.494 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:35:47.494 Removing: /var/run/dpdk/spdk0/hugepage_info
00:35:47.494 Removing: /var/run/dpdk/spdk0
Removing: /var/run/dpdk/spdk_pid57488
00:35:47.494 Removing: /var/run/dpdk/spdk_pid57723
00:35:47.494 Removing: /var/run/dpdk/spdk_pid57953
00:35:47.494 Removing: /var/run/dpdk/spdk_pid58066
00:35:47.494 Removing: /var/run/dpdk/spdk_pid58112
00:35:47.494 Removing: /var/run/dpdk/spdk_pid58251
00:35:47.495 Removing: /var/run/dpdk/spdk_pid58269
00:35:47.495 Removing: /var/run/dpdk/spdk_pid58479
00:35:47.495 Removing: /var/run/dpdk/spdk_pid58590
00:35:47.495 Removing: /var/run/dpdk/spdk_pid58698
00:35:47.495 Removing: /var/run/dpdk/spdk_pid58820
00:35:47.495 Removing: /var/run/dpdk/spdk_pid58928
00:35:47.495 Removing: /var/run/dpdk/spdk_pid58967
00:35:47.495 Removing: /var/run/dpdk/spdk_pid59004
00:35:47.495 Removing: /var/run/dpdk/spdk_pid59080
00:35:47.495 Removing: /var/run/dpdk/spdk_pid59202
00:35:47.495 Removing: /var/run/dpdk/spdk_pid59645
00:35:47.754 Removing: /var/run/dpdk/spdk_pid59720
00:35:47.754 Removing: /var/run/dpdk/spdk_pid59796
00:35:47.754 Removing: /var/run/dpdk/spdk_pid59818
00:35:47.754 Removing: /var/run/dpdk/spdk_pid59967
00:35:47.754 Removing: /var/run/dpdk/spdk_pid59985
00:35:47.754 Removing: /var/run/dpdk/spdk_pid60144
00:35:47.754 Removing: /var/run/dpdk/spdk_pid60160
00:35:47.754 Removing: /var/run/dpdk/spdk_pid60224
00:35:47.754 Removing: /var/run/dpdk/spdk_pid60248
00:35:47.754 Removing: /var/run/dpdk/spdk_pid60316
00:35:47.754 Removing: /var/run/dpdk/spdk_pid60335
00:35:47.754 Removing: /var/run/dpdk/spdk_pid60536
00:35:47.754 Removing: /var/run/dpdk/spdk_pid60571
00:35:47.754 Removing: /var/run/dpdk/spdk_pid60656
00:35:47.754 Removing: /var/run/dpdk/spdk_pid60850
00:35:47.754 Removing: /var/run/dpdk/spdk_pid60945
00:35:47.754 Removing: /var/run/dpdk/spdk_pid60987
00:35:47.754 Removing: /var/run/dpdk/spdk_pid61439
00:35:47.754 Removing: /var/run/dpdk/spdk_pid61537
00:35:47.754 Removing: /var/run/dpdk/spdk_pid61657
00:35:47.754 Removing: /var/run/dpdk/spdk_pid61710
00:35:47.754 Removing: /var/run/dpdk/spdk_pid61736
00:35:47.754 Removing: /var/run/dpdk/spdk_pid61820
00:35:47.754 Removing: /var/run/dpdk/spdk_pid62463
00:35:47.754 Removing: /var/run/dpdk/spdk_pid62511
00:35:47.754 Removing: /var/run/dpdk/spdk_pid63001
00:35:47.754 Removing: /var/run/dpdk/spdk_pid63099
00:35:47.754 Removing: /var/run/dpdk/spdk_pid63221
00:35:47.754 Removing: /var/run/dpdk/spdk_pid63280
00:35:47.754 Removing: /var/run/dpdk/spdk_pid63305
00:35:47.754 Removing: /var/run/dpdk/spdk_pid63336
00:35:47.754 Removing: /var/run/dpdk/spdk_pid65220
00:35:47.754 Removing: /var/run/dpdk/spdk_pid65369
00:35:47.754 Removing: /var/run/dpdk/spdk_pid65379
00:35:47.754 Removing: /var/run/dpdk/spdk_pid65396
00:35:47.754 Removing: /var/run/dpdk/spdk_pid65437
00:35:47.754 Removing: /var/run/dpdk/spdk_pid65441
00:35:47.754 Removing: /var/run/dpdk/spdk_pid65453
00:35:47.754 Removing: /var/run/dpdk/spdk_pid65503
00:35:47.754 Removing: /var/run/dpdk/spdk_pid65507
00:35:47.754 Removing: /var/run/dpdk/spdk_pid65519
00:35:47.754 Removing: /var/run/dpdk/spdk_pid65564
00:35:47.754 Removing: /var/run/dpdk/spdk_pid65568
00:35:47.754 Removing: /var/run/dpdk/spdk_pid65580
00:35:47.754 Removing: /var/run/dpdk/spdk_pid67017
00:35:47.754 Removing: /var/run/dpdk/spdk_pid67126
00:35:47.754 Removing: /var/run/dpdk/spdk_pid68558
00:35:47.754 Removing: /var/run/dpdk/spdk_pid70307
00:35:47.754 Removing: /var/run/dpdk/spdk_pid70388
00:35:47.754 Removing: /var/run/dpdk/spdk_pid70463
00:35:47.754 Removing: /var/run/dpdk/spdk_pid70578
00:35:47.754 Removing: /var/run/dpdk/spdk_pid70674
00:35:48.014 Removing: /var/run/dpdk/spdk_pid70780
00:35:48.014 Removing: /var/run/dpdk/spdk_pid70871
00:35:48.014 Removing: /var/run/dpdk/spdk_pid70962
00:35:48.014 Removing: /var/run/dpdk/spdk_pid71087
00:35:48.014 Removing: /var/run/dpdk/spdk_pid71183
00:35:48.014 Removing: /var/run/dpdk/spdk_pid71282
00:35:48.014 Removing: /var/run/dpdk/spdk_pid71370
00:35:48.014 Removing: /var/run/dpdk/spdk_pid71452
00:35:48.014 Removing: /var/run/dpdk/spdk_pid71567
00:35:48.014 Removing: /var/run/dpdk/spdk_pid71663
00:35:48.014 Removing: /var/run/dpdk/spdk_pid71770
00:35:48.014 Removing: /var/run/dpdk/spdk_pid71846
00:35:48.014 Removing: /var/run/dpdk/spdk_pid71930
00:35:48.014 Removing: /var/run/dpdk/spdk_pid72040
00:35:48.014 Removing: /var/run/dpdk/spdk_pid72132
00:35:48.014 Removing: /var/run/dpdk/spdk_pid72233
00:35:48.014 Removing: /var/run/dpdk/spdk_pid72313
00:35:48.014 Removing: /var/run/dpdk/spdk_pid72387
00:35:48.014 Removing: /var/run/dpdk/spdk_pid72467
00:35:48.014 Removing: /var/run/dpdk/spdk_pid72550
00:35:48.014 Removing: /var/run/dpdk/spdk_pid72659
00:35:48.014 Removing: /var/run/dpdk/spdk_pid72759
00:35:48.014 Removing: /var/run/dpdk/spdk_pid72858
00:35:48.014 Removing: /var/run/dpdk/spdk_pid72933
00:35:48.014 Removing: /var/run/dpdk/spdk_pid73013
00:35:48.014 Removing: /var/run/dpdk/spdk_pid73093
00:35:48.014 Removing: /var/run/dpdk/spdk_pid73167
00:35:48.014 Removing: /var/run/dpdk/spdk_pid73276
00:35:48.014 Removing: /var/run/dpdk/spdk_pid73372
00:35:48.014 Removing: /var/run/dpdk/spdk_pid73527
00:35:48.014 Removing: /var/run/dpdk/spdk_pid73817
00:35:48.014 Removing: /var/run/dpdk/spdk_pid73859
00:35:48.014 Removing: /var/run/dpdk/spdk_pid74313
00:35:48.014 Removing: /var/run/dpdk/spdk_pid74508
00:35:48.014 Removing: /var/run/dpdk/spdk_pid74611
00:35:48.014 Removing: /var/run/dpdk/spdk_pid74721
00:35:48.014 Removing: /var/run/dpdk/spdk_pid74780
00:35:48.014 Removing: /var/run/dpdk/spdk_pid74811
00:35:48.014 Removing: /var/run/dpdk/spdk_pid75101
00:35:48.014 Removing: /var/run/dpdk/spdk_pid75176
00:35:48.014 Removing: /var/run/dpdk/spdk_pid75269
00:35:48.014 Removing: /var/run/dpdk/spdk_pid75689
00:35:48.014 Removing: /var/run/dpdk/spdk_pid75833
00:35:48.014 Removing: /var/run/dpdk/spdk_pid76643
00:35:48.014 Removing: /var/run/dpdk/spdk_pid76786
00:35:48.014 Removing: /var/run/dpdk/spdk_pid76996
00:35:48.014 Removing: /var/run/dpdk/spdk_pid77104
00:35:48.014 Removing: /var/run/dpdk/spdk_pid77464
00:35:48.014 Removing: /var/run/dpdk/spdk_pid77753
00:35:48.014 Removing: /var/run/dpdk/spdk_pid78121
00:35:48.014 Removing: /var/run/dpdk/spdk_pid78321
00:35:48.014 Removing: /var/run/dpdk/spdk_pid78462
00:35:48.014 Removing: /var/run/dpdk/spdk_pid78526
00:35:48.014 Removing: /var/run/dpdk/spdk_pid78670
00:35:48.014 Removing: /var/run/dpdk/spdk_pid78710
00:35:48.014 Removing: /var/run/dpdk/spdk_pid78775
00:35:48.274 Removing: /var/run/dpdk/spdk_pid78986
00:35:48.274 Removing: /var/run/dpdk/spdk_pid79222
00:35:48.274 Removing: /var/run/dpdk/spdk_pid79684
00:35:48.274 Removing: /var/run/dpdk/spdk_pid80136
00:35:48.274 Removing: /var/run/dpdk/spdk_pid80593
00:35:48.274 Removing: /var/run/dpdk/spdk_pid81119
00:35:48.274 Removing: /var/run/dpdk/spdk_pid81269
00:35:48.274 Removing: /var/run/dpdk/spdk_pid81358
00:35:48.274 Removing: /var/run/dpdk/spdk_pid81990
00:35:48.274 Removing: /var/run/dpdk/spdk_pid82060
00:35:48.274 Removing: /var/run/dpdk/spdk_pid82540
00:35:48.274 Removing: /var/run/dpdk/spdk_pid82937
00:35:48.274 Removing: /var/run/dpdk/spdk_pid83458
00:35:48.274 Removing: /var/run/dpdk/spdk_pid83580
00:35:48.274 Removing: /var/run/dpdk/spdk_pid83634
00:35:48.274 Removing: /var/run/dpdk/spdk_pid83701
00:35:48.274 Removing: /var/run/dpdk/spdk_pid83757
00:35:48.274 Removing: /var/run/dpdk/spdk_pid83821
00:35:48.274 Removing: /var/run/dpdk/spdk_pid84013
00:35:48.274 Removing: /var/run/dpdk/spdk_pid84097
00:35:48.274 Removing: /var/run/dpdk/spdk_pid84168
00:35:48.274 Removing: /var/run/dpdk/spdk_pid84241
00:35:48.274 Removing: /var/run/dpdk/spdk_pid84281
00:35:48.274 Removing: /var/run/dpdk/spdk_pid84358
00:35:48.274 Removing: /var/run/dpdk/spdk_pid84528
00:35:48.274 Removing: /var/run/dpdk/spdk_pid84784
00:35:48.274 Removing: /var/run/dpdk/spdk_pid85231
00:35:48.274 Removing: /var/run/dpdk/spdk_pid85664
00:35:48.274 Removing: /var/run/dpdk/spdk_pid86111
00:35:48.274 Removing: /var/run/dpdk/spdk_pid86600
00:35:48.274 Clean
00:35:48.274 12:19:38 -- common/autotest_common.sh@1453 -- # return 0
00:35:48.274 12:19:38 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:35:48.274 12:19:38 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:48.274 12:19:38 -- common/autotest_common.sh@10 -- # set +x
00:35:48.534 12:19:38 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:35:48.534 12:19:38 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:48.534 12:19:38 -- common/autotest_common.sh@10 -- # set +x
00:35:48.534 12:19:38 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:35:48.534 12:19:38 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:35:48.534 12:19:38 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:35:48.534 12:19:38 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:35:48.534 12:19:38 -- spdk/autotest.sh@398 -- # hostname
00:35:48.534 12:19:38 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:35:48.793 geninfo: WARNING: invalid characters removed from testname!
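
Note: the capture above and the merge/filter passes that follow are the usual lcov pipeline: combine the pre-test baseline with the post-test counters, then strip third-party and system code from the report. Condensed, with the long --rc branch/function options omitted for readability:

    lcov -q -c --no-external -d ./spdk -o cov_test.info          # capture counters from the build tree
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info  # merge baseline + test captures
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info       # drop DPDK sources from the report
    lcov -q -r cov_total.info '/usr/*' -o cov_total.info         # drop system headers and libraries
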
00:36:15.357 12:20:01 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:15.357 12:20:04 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:16.737 12:20:06 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:19.274 12:20:08 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:21.182 12:20:10 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:23.088 12:20:13 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:25.625 12:20:15 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:25.625 12:20:15 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:25.625 12:20:15 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:36:25.625 12:20:15 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:25.625 12:20:15 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:25.625 12:20:15 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:36:25.625 + [[ -n 5245 ]]
00:36:25.625 + sudo kill 5245
00:36:25.634 [Pipeline] }
00:36:25.650 [Pipeline] // timeout
00:36:25.656 [Pipeline] }
00:36:25.667 [Pipeline] // stage
00:36:25.672 [Pipeline] }
00:36:25.683 [Pipeline] // catchError
00:36:25.692 [Pipeline] stage
00:36:25.693 [Pipeline] { (Stop VM)
00:36:25.705 [Pipeline] sh
00:36:25.988 + vagrant halt
00:36:29.280 ==> default: Halting domain...
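
Note: VM teardown happens in two pipeline steps: vagrant halt shuts the guest down cleanly, and the following stage's vagrant destroy -f removes it without an interactive confirmation. Equivalent standalone commands:

    vagrant halt          # graceful shutdown; libvirt prints "Halting domain..."
    vagrant destroy -f    # forced, non-interactive removal; prints "Removing domain..."
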
00:36:35.859 [Pipeline] sh
00:36:36.145 + vagrant destroy -f
00:36:38.682 ==> default: Removing domain...
00:36:39.373 [Pipeline] sh
00:36:39.653 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:36:39.662 [Pipeline] }
00:36:39.676 [Pipeline] // stage
00:36:39.680 [Pipeline] }
00:36:39.693 [Pipeline] // dir
00:36:39.699 [Pipeline] }
00:36:39.712 [Pipeline] // wrap
00:36:39.718 [Pipeline] }
00:36:39.730 [Pipeline] // catchError
00:36:39.739 [Pipeline] stage
00:36:39.741 [Pipeline] { (Epilogue)
00:36:39.753 [Pipeline] sh
00:36:40.038 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:45.329 [Pipeline] catchError
00:36:45.331 [Pipeline] {
00:36:45.344 [Pipeline] sh
00:36:45.626 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:45.885 Artifacts sizes are good
00:36:45.894 [Pipeline] }
00:36:45.909 [Pipeline] // catchError
00:36:45.921 [Pipeline] archiveArtifacts
00:36:45.928 Archiving artifacts
00:36:46.045 [Pipeline] cleanWs
00:36:46.057 [WS-CLEANUP] Deleting project workspace...
00:36:46.057 [WS-CLEANUP] Deferred wipeout is used...
00:36:46.064 [WS-CLEANUP] done
00:36:46.066 [Pipeline] }
00:36:46.082 [Pipeline] // stage
00:36:46.087 [Pipeline] }
00:36:46.102 [Pipeline] // node
00:36:46.107 [Pipeline] End of Pipeline
00:36:46.143 Finished: SUCCESS